Azure WebApps for Containers and AppSettings/Environment Variables

I have a Web App for Containers running Linux, in which I run a Docker container. It all works, but I wanted to add an environment variable, as follows:
docker run -e my_app_setting_var=theValue
The documentation says that app settings will be automatically added as -e environment variables:
App Settings are injected into your app as environment variables at runtime
But as you can see from my logs, it doesn't get added (some stuff stripped out):
docker run -d -p 30174:5000 --name thename -e WEBSITES_ENABLE_APP_SERVICE_STORAGE=false -e WEBSITES_PORT=5000 -e WEBSITE_SITE_NAME=the_website_name -e WEBSITE_AUTH_ENABLED=False -e WEBSITE_ROLE_INSTANCE_ID=0 -e WEBSITE_INSTANCE_ID=the_role_id -e HTTP_LOGGING_ENABLED=1 acr_where_image_is.azurecr.io/image_name:latest Dockerfile
I would expect to see an environment variable like this:
docker run -d -p 30174:5000 --name thename -e my_app_setting_var=theValue
Any ideas?
Cheers

Azure Web App for Containers does inject the environment variables that you set in the portal or through the Azure CLI. But unfortunately, you will not see your own variables in the docker run command in the logs; the logs show only a subset of the variables. You can confirm they are set by echoing them from a tool such as Kudu, where you will see them in the environment.
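For example, a minimal sketch (the resource group and web app names are placeholders): set the app setting with the Azure CLI, then verify it from the Kudu SSH console rather than from the docker run line in the logs.
az webapp config appsettings set \
  --resource-group my-rg \
  --name my-webapp \
  --settings my_app_setting_var=theValue
# then, in the Kudu SSH console of the running container:
echo $my_app_setting_var        # prints: theValue
env | grep my_app_setting_var   # confirms it is in the environment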

Related

Azure App Service environment variables are not available in container

I have a Docker container running in Azure App Service. In my Azure App Service's Configuration section I have a list of defined variables for the database connection. I want to use them in my Docker container, in the wp-config.php file, to connect to an Azure MySQL database. However, the variables are not available in the container at all.
I have tried reviewing them with the 'env' or 'printenv' commands after adding the following line to my start script:
eval $(printenv | sed -n "s/^\([^=]\+\)=\(.*\)$/export \1=\2/p" | sed 's/"/\\\"/g' | sed '/=/s//="/' | sed 's/$/"/' >> /etc/profile)
No success, unfortunately. I have also tried getting them via PHP with the getenv('variable_name') function, also with no success. All of my environment variables are available in the Kudu container, which is created alongside the actual container running my app within App Service; however, I cannot use them at all in my application container. The container itself is an Apache + WordPress image, and I am attaching my Dockerfile and startup script below:
Dockerfile:
FROM wordpress:4.9.1-apache
RUN apt-get update
RUN apt-get -y install openssh-server \
&& echo "root:Docker!" | chpasswd
COPY sshd_config /etc/ssh/
RUN mkdir -p /tmp
COPY ssh_setup.sh /tmp
RUN chmod +x /tmp/ssh_setup.sh \
&& (sleep 1;/tmp/ssh_setup.sh 2>&1 > /dev/null)
COPY . /var/www/html
RUN chown -R www-data:www-data /var/www/html/
ENV PORT 80
EXPOSE 80 2222
ADD start.sh /
RUN chmod +x /start.sh
ENTRYPOINT ["/start.sh"]
Start.sh:
#!/bin/bash
/usr/sbin/sshd
# note: the export line has to come before apache2-foreground, which blocks and never returns
eval $(printenv | sed -n "s/^\([^=]\+\)=\(.*\)$/export \1=\2/p" | sed 's/"/\\\"/g' | sed '/=/s//="/' | sed 's/$/"/' >> /etc/profile)
apache2-foreground
UPD: The issue was due to the app setting that enables persistent storage. Once I changed the value of WEBSITES_ENABLE_APP_SERVICE_STORAGE from 'true' to 'false', application settings started to work as environment variables.
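For reference, that setting can be flipped with the Azure CLI as well; a minimal sketch with placeholder resource names:
az webapp config appsettings set \
  --resource-group my-rg \
  --name my-wordpress-app \
  --settings WEBSITES_ENABLE_APP_SERVICE_STORAGE=false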

Setting environment variables on docker before exec

I'm running a set of commands in an Ubuntu Docker container, and I need to set a couple of environment variables for my scripts to work.
I have tried several alternatives but none of them seem to solve my problem.
Alternative 1: Using --env or --env-file
On my already running container, I run either:
docker exec -it --env TESTVAR="some_path" ai_pipeline_new_image bash -c "echo $TESTVAR"
docker exec -it --env-file env_vars ai_pipeline_new_image bash -c "echo $TESTVAR"
The content of env_vars:
TESTVAR="some_path"
In both cases the output is empty.
Alternative 2: Using a dockerfile
I create my image using the following docker file
FROM ai_pipeline_yh
ENV TESTVAR "A_PATH"
With this alternative the variable is set if I attach to the container (i.e. if I run an interactive shell), but the output is blank if I run docker exec -it ai_pipeline_new_image bash -c "echo $TESTVAR" from the host.
What is the clean way to do this?
EDIT
Turns out that if I check the state of the variables from a shell script, they are set, but not if I check them directly with bash -c "echo $VAR". I would really like to understand why this is so. I provide a minimal example:
Run docker
docker run -it --name ubuntu_env_vars ubuntu
Create a file that echoes a VAR (inside the container)
root@afdc8c494e8a:/# echo "echo \$VAR" > env_check.sh
root@afdc8c494e8a:/# chmod +x env_check.sh
From the host, run:
docker exec -it -e VAR=BLA ubuntu_env_vars bash -c "echo $VAR"
(Blank output)
From the host, run:
docker exec -it -e VAR=BLA ubuntu_env_vars bash -c "/env_check.sh"
output: BLA
Why???????
I revealed my noobness. Answering my own question here:
Both options, --env-file file or --env foo=bar, are okay.
I forgot to escape the $ character when testing, so the host shell expanded the variable before docker exec ever ran. Therefore the correct command to test whether the variable exists is:
docker exec -it my_docker bash -c "echo \$MYVAR"
It is a good option to design your apps to be driven by environment variables at runtime (application start).
Don't rely on env variables at the Docker build stage.
Sometimes the problem is not the environment variables or Docker; the problem is the app that reads the environment variables.
Anyway, you have these options to inject environment variables to a docker container at runtime:
-e foo=bar
This is the most basic way:
docker run -it -e foo=bar ubuntu
These variables will be available from the start of your container.
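If you have more than a couple of variables, an env file keeps the command short. A small sketch; note that docker does not strip quotes from --env-file values, so write them unquoted:
# env_vars file: one VAR=value per line, no quotes
foo=bar
TESTVAR=some_path
docker run -it --env-file env_vars ubuntu bash -c 'echo "$foo $TESTVAR"'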
remote variables
If you need to pass several variables, using -e for each one will not be the best way.
Alternatively, if you don't want to use .env files or any kind of local file with variables, you should:
prepare your app to read environment variables
inject the variables in a docker entrypoint bash script, reading them from a remote variables manager
In the shell script you fetch the remote variables and inject them using source /foo/bar/variables.
With this approach you will have a variables manager for all of your containers. Such a variables manager has these features:
login
create applications
create variables by application
create global variables if a variable is required for two or more apps
expose an http endpoint to be used in the client (apps) in order to get the variables
crypt
etc
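A minimal sketch of such an entrypoint, assuming a hypothetical variables-manager endpoint that returns lines like export FOO=bar:
#!/bin/bash
# entrypoint.sh - the endpoint URL and token are assumptions; adjust to your manager
curl -s -H "Authorization: Bearer $VARS_TOKEN" \
  "https://vars.example.com/apps/my-app/variables" > /tmp/variables
source /tmp/variables   # inject the fetched variables into this shell
exec "$@"               # hand off to the real application process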
You have these options:
spring cloud
zookeeper
https://www.vaultproject.io/
https://www.doppler.com/
Configurator (I'm the author)
Let me know if you want help using this approach.

Azure WebApp for Containers replace docker run

I'm currently developing an Azure web app from a container. I want to rewrite the initial docker run that Azure executes, because my container needs some environment variables.
I tried different ways, but nothing worked. For example, if I set my variables inside the 'Startup File' field in the container settings, the content is appended to the original docker run, as explained here: Startup File in webapp for container
Something like this:
docker run -d -p 5920:80 --name test -e WEBSITES_ENABLE_APP_SERVICE_STORAGE=false -e WEBSITE_SITE_NAME=test -e WEBSITE_AUTH_ENABLED=False -e PORT=80 -e WEBSITE_ROLE_INSTANCE_ID=0 -e WEBSITE_HOSTNAME=test.net -e WEBSITE_INSTANCE_ID=0 test/myImage:latest -e DB_HOST=test.com:3306 -e DB_DATABASE=test -e DB_USERNAME=test -e DB_PASSWORD=test -e APP_URL=https://test.com
Obviously this won't work.
I tried to enter the app using FTPS, but I can't find the .env file, and I cannot connect to the container via SSH because the connection keeps failing.
So, my question is: how can I override the initial docker run command that the Azure container is executing?
I added all my environment variables in the app settings and I can see them in Kudu, but I'm missing a step.
Thanks for your help
I resolved my problem by using Docker Compose in the container settings.
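A minimal sketch of what that can look like, reusing the placeholder image and variables from the question: with Docker Compose in the container settings, the variables are declared under environment instead of riding along on the docker run line.
version: '3.7'
services:
  app:
    image: test/myImage:latest
    ports:
      - '5920:80'
    environment:
      - DB_HOST=test.com:3306
      - DB_DATABASE=test
      - DB_USERNAME=test
      - DB_PASSWORD=test
      - APP_URL=https://test.com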

Running a docker logs command in background remotely over ssh

We have a use case where the docker remote execution is separately executed on another server.
So users log in to server A and submit an ssh command which runs a script on remote server B.
The script performs several docker commands like prune, build, and run, which are working fine.
I have a command at the end of the script which is supposed to write the docker logs in the background to an EFS file system which is mounted on both servers A and B. This way users can access the logfile from server A without actually logging into server B (to prevent access to the containers).
I have tried all available solutions related to this and nothing seems to be working for running a process in the background remotely.
Any help is greatly appreciated.
The below code is the script on the remote server. The user calls this script from server A over ssh, like: ssh id@serverB-IP docker_script.sh
args=("$@")   # implied by the usage below; the original posting omitted this line
# define the log collector first, so it exists before it is called
docker_gen_log(){
loc=${args[0]}
cd $loc
imagename=${args[1]}
docker logs -f $imagename &> run.log &
}
loc=${args[0]}
cd $loc
# Set parameters
imagename=${args[1]}
port=${args[2]}
desired_port=${args[3]}
docker stop $imagename && sleep 10 || true
docker build -t $imagename $loc |& tee build.log
docker system prune -f
port_config=$([[ -z "${port}" ]] && echo '' || echo -p $desired_port:$port)
docker run -d --rm --name $imagename $port_config $imagename
sleep 10
docker_gen_log $loc $imagename
echo ""
echo "Docker build and run are now complete ....Please refer to the logs under $loc/build.log $loc/run.log"
If you're only running a single container like you show, you can just run it in the foreground. The container logs will go to the docker run stdout/stderr and you can collect them normally.
docker run --rm --name "$imagename" $port_config "$imagename" \
> "$loc/run.log" 2>&1
# also note no `-d` option
echo ""
echo "Docker build and run are now complete ....Please refer to the logs under $loc/build.log $loc/run.log"
If you have multiple containers, or if you want the container to keep running if the script disconnects, you can collect logs at any point until the container is removed. For this sort of setup you'd want to explicitly docker rm the container.
docker run -d --name "$imagename" $port_config "$imagename"
# note, no --rm option
docker wait "$imagename" # actually container name
docker logs "$imagename" >"$loc/run.log" 2>&1
docker rm "$imagename"
This latter approach won't give you incremental logs while the container is running. Given that your script seems to assume the container will be finished within 10 seconds that's probably not a concern for you.
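If you do need incremental logs and a log follower that survives the ssh disconnect, one sketch is to detach the follower from the session with nohup (standard shell tooling, using the same names as above):
nohup docker logs -f "$imagename" >"$loc/run.log" 2>&1 &
disown   # immune to the SIGHUP sent when the ssh session ends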

How can I use the internal ip address of a container as an environment variable in Docker

I'm trying to get the IP address of my docker container as an environment variable within the container. Here is what I've tried:
When starting the container
docker run -dPi -e ip=`hostname -i` myDockerImage
When the container is already booted up
docker exec -it myDockerImage bash -c "export ip=`hostname -i`"
The problem with these two methods is that they use the IP address of the host running the commands, not of the docker container they are run on.
So then I created a script inside the docker container that looks like this:
#!/bin/bash
export ip=`hostname -i`
echo $ip
And then run this with
docker exec -it myDockerImage bash -c ". ipVariableScript.sh"
When I add my_cmd (which in my case is bash) to the end of the script, it works in that one bash session, but I can't use the variable later in the files where I need it. I need to set it as an environment variable, not as a variable for one session.
So I already sourced it with the '.', but it still won't echo when I'm in the container. If I put an echo $ip in the script, it gives me the correct IP address, but the variable can only be used from within the script it's being set in.
Service names in Docker are more reliable and easier to use. However, here's how to assign the Docker guest IP to an environment variable inside the guest:
$ docker run -it ubuntu bash -c 'IP=$(hostname -i); echo ip=$IP'
ip=172.17.0.76
So, this is an old question, but I ended up with the same question yesterday, and my solution is this: use host.docker.internal.
My containers were working fine, but at some point the IP changed and I needed to change it in my docker-compose. Of course I can use docker network inspect my-container_default and get my internal IP from that, but this also means changing my docker-compose every time the IP changes (and I'm still not familiar enough with Docker to detect IP changes automatically or write a more sophisticated config). So I use the host.docker.internal hostname. Now I no longer need to check what my IP is in Docker, and everything stays connected.
Here an example of a node app which uses elastic search and needs to connect.
version: '3.7'
services:
  api:
    ...configs...
    depends_on:
      - 'elasticsearch'
    volumes:
      - ./:/usr/local/api
    ports:
      - '3000:80'
    links:
      - elasticsearch:els
    environment:
      - PORT=80
      - ELASTIC_NODE=http://host.docker.internal:9200
  elasticsearch:
    container_name: 'els'
    image: docker.elastic.co/elasticsearch/elasticsearch:7.13.4
    ...elastic search container configs...
    ports:
      - '9200:9200'
    expose:
      - 9200
    networks:
      - elastic
networks:
  elastic:
    driver: bridge
Note the "ELASTIC_NODE=http://host.docker.internal:9200" on api environments and the "network" that the elastic search container is using (on bridge mode)
This way you don't need to worry about knowing your IP.
The container name is postgres in this example. It is a bit clumsy, but it delivers.
container_ip=$(docker inspect postgres -f "{{json .NetworkSettings.Networks }}" \
  | awk -v FS=: '{print $9}' \
  | cut -f1 -d\,)
container_ip="${container_ip//\"}"   # strip the surrounding quotes
Make a function out of it:
#!/usr/bin/env bash
set -o errexit
set -o nounset
set -eu -o pipefail
#set -x
#trap read debug
#assign container ip address to variable
function get_container_ip () {
container_ip=$(docker inspect "$1" -f "{{json .NetworkSettings.Networks }}" \
| awk -v FS=: '{print $9}' \
| cut -f1 -d\,)
container_ip=$(echo "${container_ip//\"}")
}
get_container_ip "$1"
echo "$container_ip"
