Azure Web App for Containers: replacing the initial docker run

I'm currently developing an Azure Web App from a container. I want to override the initial docker run command that Azure issues, because my container needs some environment variables.
I have tried different approaches, but nothing works. For example, if I set my variables in the 'Startup File' field in the container settings, Azure appends that content to the original docker run command, as explained here: Startup File in Web App for Containers.
The result is something like this:
docker run -d -p 5920:80 --name test -e WEBSITES_ENABLE_APP_SERVICE_STORAGE=false -e WEBSITE_SITE_NAME=test -e WEBSITE_AUTH_ENABLED=False -e PORT=80 -e WEBSITE_ROLE_INSTANCE_ID=0 -e WEBSITE_HOSTNAME=test.net -e WEBSITE_INSTANCE_ID=0 test/myImage:latest -e DB_HOST=test.com:3306 -e DB_DATABASE=test -e DB_USERNAME=test -e DB_PASSWORD=test -e APP_URL=https://test.com
This obviously won't work, since everything after the image name is passed as arguments to the container's entrypoint rather than to docker run itself.
I tried to access the app over FTPS, but I can't find the .env file, and connecting to the container via SSH keeps failing.
So my question is: how can I override the initial docker run command that the Azure container is executing?
I added all my environment variables in the app settings and I can see them in Kudu, but I'm missing a step.
Thanks for your help.

I resolved my problem by using Docker Compose in the container settings.
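For reference, the 'Docker Compose' option in the container settings accepts a minimal compose file along these lines (the image name and variable values are placeholders taken from the example above):

```yaml
version: '3.3'
services:
  app:
    image: test/myImage:latest
    ports:
      - "80:80"
    environment:
      - DB_HOST=test.com:3306
      - DB_DATABASE=test
      - DB_USERNAME=test
      - DB_PASSWORD=test
      - APP_URL=https://test.com
```

With this approach the environment variables live in the compose file instead of the generated docker run line.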

Related

Azure: execute a container instance with parameters

I have a container in an Azure Container Registry and have made a container instance from the registry. This container runs a script.sh at the entry point and echoes a value.
Dockerfile:
FROM ubuntu
WORKDIR /docker
COPY . .
ENTRYPOINT ["./script.sh"]
script.sh:
#!/bin/bash
if [[ -z $1 ]] ; then
    echo "simple task: no parameters were passed"
else
    echo "$1"
fi
How do I execute the container and pass it a different starting value?
In docker we can just put values at the end of docker run. The container runs using the referenced image, executes the script and deletes the running container.
docker run --rm --name "simple-temp" "simple" "value1" "value1"
I want the equivalent of this command: create and run an instance from the registry, run the entry point once, then shut down and delete the container. How do I accomplish this in Azure Container Instances? If not Container Instances, which service should I use?
You can use the Docker CLI's ACI integration to run containers directly on Azure ACI:
https://learn.microsoft.com/en-us/azure/container-instances/quickstart-docker-cli
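Alternatively, a sketch with the az CLI (the resource group, registry, and image names are placeholders): --command-line replaces the image's entrypoint, so the script itself is included in the command, and --restart-policy Never leaves the container stopped after a single run.

```shell
az container create \
  --resource-group myResourceGroup \
  --name simple-temp \
  --image myregistry.azurecr.io/simple:latest \
  --restart-policy Never \
  --command-line "./script.sh value1"
```

Unlike docker run --rm, ACI does not remove the container group automatically; delete it afterwards with az container delete --resource-group myResourceGroup --name simple-temp --yes.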

Setting environment variables on docker before exec

I'm running a set of commands in an Ubuntu Docker container, and I need to set a couple of environment variables for my scripts to work.
I have tried several alternatives but none of them seem to solve my problem.
Alternative 1: Using --env or --env-file
On my already running container, I run either:
docker exec -it --env TESTVAR="some_path" ai_pipeline_new_image bash -c "echo $TESTVAR"
docker exec -it --env-file env_vars ai_pipeline_new_image bash -c "echo $TESTVAR"
The content of env_vars:
TESTVAR="some_path"
In both cases the output is empty.
Alternative 2: Using a Dockerfile
I create my image using the following Dockerfile:
FROM ai_pipeline_yh
ENV TESTVAR "A_PATH"
With this alternative the variable is set if I attach to the container (i.e. if I run an interactive shell), but the output is blank if I run docker exec -it ai_pipeline_new_image bash -c "echo $TESTVAR" from the host.
What is the clean way to do this?
EDIT
Turns out that if I check the state of the variables from a shell script, they are set, but not if I check them directly with bash -c "echo $VAR". I would really like to understand why this is so. Here is a minimal example:
Run docker
docker run -it --name ubuntu_env_vars ubuntu
Create a file that echoes a VAR (inside the container)
root@afdc8c494e8a:/# echo "echo \$VAR" > env_check.sh
root@afdc8c494e8a:/# chmod +x env_check.sh
From the host, run:
docker exec -it -e VAR=BLA ubuntu_env_vars bash -c "echo $VAR"
(Blank output)
From the host, run:
docker exec -it -e VAR=BLA ubuntu_env_vars bash -c "/env_check.sh"
output: BLA
Why?
I revealed my noobness. Answering my own question here:
Both options, --env-file file or -e foo=bar, work fine.
I forgot to escape the $ character when testing. The correct command to test whether the variable exists is:
docker exec -it my_docker bash -c "echo \$MYVAR"
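The docker exec behaviour can be reproduced without Docker at all: with double quotes, $VAR is expanded by the *host* shell before the inner shell ever starts. A minimal sketch using plain bash in place of docker exec:

```shell
unset VAR
# Host shell expands $VAR (unset here), so the inner shell just runs `echo`.
# This mirrors: docker exec -e VAR=BLA ... bash -c "echo $VAR"
empty=$(env VAR=BLA bash -c "echo $VAR")
# Escaping the dollar sign defers expansion to the inner shell, where VAR=BLA.
# This mirrors: docker exec -e VAR=BLA ... bash -c "echo \$VAR"
set_inner=$(env VAR=BLA bash -c "echo \$VAR")
echo "unescaped: '$empty'"
echo "escaped: '$set_inner'"
```

The same rule explains why the env_check.sh script worked: inside the script, $VAR is only ever expanded by the container's shell.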
It is good design to have your apps read their configuration from environment variables at runtime (application start).
Don't bake environment variables in at the docker build stage.
Sometimes the problem is not the environment variables or Docker; the problem is the app that reads the environment variables.
In any case, you have these options for injecting environment variables into a Docker container at runtime:
-e foo=bar
This is the most basic way:
docker run -it -e foo=bar ubuntu
These variables will be available from the start of your container.
remote variables
If you need to pass several variables, -e is not the most practical way.
If you also don't want to use .env files or any other local file with variables, you should:
prepare your app to read environment variables
inject the variables in a Docker entrypoint bash script, reading them from a remote variables manager
in that shell script, fetch the remote variables and inject them with source /foo/bar/variables. A sample here
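The entrypoint step can be sketched end to end. In a real entrypoint the variables file would be fetched from the variables manager's HTTP endpoint (e.g. with curl); here it is written locally so the source mechanism can be seen, and all names and values are placeholders:

```shell
# Simulate the file an entrypoint would download from the variables manager
cat > /tmp/variables <<'EOF'
export DB_HOST=db.example.com
export DB_PORT=3306
EOF
# Inject the variables into the current shell
source /tmp/variables
echo "app would connect to $DB_HOST:$DB_PORT"
# A real entrypoint would finish by handing control to the app: exec "$@"
```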
With this approach you have one variables manager for all of your containers. The variables manager has these features:
login
create applications
create variables by application
create global variables if a variable is required for two or more apps
expose an http endpoint to be used in the client (apps) in order to get the variables
encryption
etc
You have these options:
spring cloud
zookeeper
https://www.vaultproject.io/
https://www.doppler.com/
Configurator (I'm the author)
Let me know if you want help using this approach.

Running a docker logs command in background remotely over ssh

We have a use case where docker commands are executed remotely on another server:
users log in to server A and submit an ssh command which runs a script on remote server B.
The script performs several docker commands like prune, build, and run, which work fine.
At the end of the script I have a command which is supposed to write the docker logs in the background to an EFS file system mounted on both
servers A and B. This way users can access the log file from server A without actually logging in to server B (to prevent access to the containers).
I have tried all the available solutions related to this, and nothing seems to work for running a process in the background remotely.
Any help is greatly appreciated.
The script below is on the remote server; a user calls it from server A over ssh, like ssh id@serverB-IP docker_script.sh
#!/bin/bash
# (function defined before use; the script is called as: location image port desired_port)
args=("$@")
docker_gen_log(){
    loc=${args[0]}
    cd "$loc"
    imagename=${args[1]}
    docker logs -f "$imagename" &> run.log &
}
loc=${args[0]}
cd "$loc"
# Set parameters
imagename=${args[1]}
port=${args[2]}
desired_port=${args[3]}
docker stop "$imagename" && sleep 10 || true
docker build -t "$imagename" "$loc" |& tee build.log
docker system prune -f
port_config=$([[ -z "${port}" ]] && echo '' || echo -p $desired_port:$port)
docker run -d --rm --name "$imagename" $port_config "$imagename"
sleep 10
docker_gen_log "$loc" "$imagename"
echo ""
echo "Docker build and run are now complete. Please refer to the logs under $loc/build.log and $loc/run.log"
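As an aside, the port_config command substitution above is fragile if any value ever contains spaces; a bash array is the safer idiom for an optional flag (a sketch with placeholder values):

```shell
# Placeholder values standing in for the script's positional parameters
port=5000
desired_port=30174
# Build the optional -p flag as an array so word-splitting stays correct
port_config=()
if [[ -n "$port" ]]; then
    port_config=(-p "$desired_port:$port")
fi
cmdline="docker run -d --rm ${port_config[*]} myimage"
echo "$cmdline"
```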
If you're only running a single container like you show, you can just run it in the foreground. The container logs will go to the docker run stdout/stderr and you can collect them normally.
docker run --rm --name "$imagename" $port_config "$imagename" \
> "$loc/run.log" 2>&1
# also note no `-d` option
echo ""
echo "Docker build and run are now complete. Please refer to the logs under $loc/build.log $loc/run.log"
If you have multiple containers, or if you want the container to keep running if the script disconnects, you can collect logs at any point until the container is removed. For this sort of setup you'd want to explicitly docker rm the container.
docker run -d --name "$imagename" $port_config "$imagename"
# note, no --rm option
docker wait "$imagename" # actually container name
docker logs "$imagename" >"$loc/run.log" 2>&1
docker rm "$imagename"
This latter approach won't give you incremental logs while the container is running. Given that your script seems to assume the container will be finished within 10 seconds that's probably not a concern for you.
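If incremental logs in the background are still wanted, the piece missing from the original script is detaching the log follower from the ssh session. With Docker that would be nohup docker logs -f "$imagename" > "$loc/run.log" 2>&1 & followed by disown; the detach pattern is demonstrated here with a stand-in command, since Docker itself isn't assumed available:

```shell
# nohup detaches the command from the terminal; disown removes it from the
# shell's job table, so it survives the ssh session ending.
nohup sh -c 'echo line1; echo line2' > /tmp/run.log 2>&1 &
disown
# Give the background process a moment to finish, then inspect the log
sleep 1
cat /tmp/run.log
```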

Azure ACI container deployment fails when launching with Command line arguments

I am trying to launch an ACI container from the Azure CLI. The deployment fails when I pass multiple commands on the command line, and it succeeds when I pass just one command, like 'ls'.
Am I passing multiple arguments to the command line in the wrong way?
az container create --resource-group rg-***-Prod-Whse-Containers --name test --image alpine:3.5 --command-line "apt-get update && apt-get install -y wget && wget https://www.google.com" --restart-policy never --vnet vnet-**-eastus2 --subnet **-ACI
Unfortunately, it seems that you cannot run multiple commands at once. See the restrictions of the exec command for ACI:
Azure Container Instances currently supports launching a single process with az container exec, and you cannot pass command arguments. For example, you cannot chain commands like in sh -c "echo FOO && echo BAR", or execute echo FOO.
From the CLI you can only execute single commands such as whoami or ls.
I suggest you create an interactive session with the container instance to execute commands continuously after creating the ACI; refer to this similar issue.
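A sketch of such an interactive session (the resource group and container names are placeholders):

```shell
az container exec \
  --resource-group rg-Prod-Whse-Containers \
  --name test \
  --exec-command "/bin/sh"
```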

Azure WebApps for Containers and AppSettings/Environment Variables

I have a Web App for Containers running Linux, running a Docker container. It all works, but I wanted to add an environment variable as follows:
docker run -e my_app_setting_var=theValue
The documentation says app settings will be automatically injected as -e environment variables:
App Settings are injected into your app as environment variables at runtime
But as you can see from my logs, it doesn't get added (some parts stripped out):
docker run -d -p 30174:5000 --name thename -e WEBSITES_ENABLE_APP_SERVICE_STORAGE=false -e WEBSITES_PORT=5000 -e WEBSITE_SITE_NAME=the_website_name -e WEBSITE_AUTH_ENABLED=False -e WEBSITE_ROLE_INSTANCE_ID=0 -e WEBSITE_INSTANCE_ID=the_role_id -e HTTP_LOGGING_ENABLED=1 acr_where_image_is.azurecr.io/image_name:latest Dockerfile
I would expect to see an environment variable like this:
docker run -d -p 30174:5000 --name thename -e my_app_setting_var=theValue
Any ideas?
Cheers
Azure Web App for Containers does inject the environment variables that you set in the portal or through the Azure CLI, but you will not see your own variables in the logged docker run line; the logs only show a subset of the platform's variables. You can confirm your settings are present by echoing them in a tool like Kudu, where you will see them in the environment.
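For completeness, a sketch of setting and then listing an app setting with the az CLI (the app and resource-group names are placeholders):

```shell
az webapp config appsettings set \
  --resource-group myResourceGroup \
  --name my-webapp \
  --settings my_app_setting_var=theValue
az webapp config appsettings list \
  --resource-group myResourceGroup \
  --name my-webapp
```

The variable then shows up in the container's environment (visible from the Kudu console) even though the logged docker run command does not list it.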
