Azure App Service - How to use environment variables in a Docker container?

I'm using an Azure App Service to run my docker image. Running my docker container requires using a couple of environment variables. Locally, when I run my container I just do something like:
docker run -e CREDENTIAL -e USERNAME myapp
However, in the Azure App Service, after defining CREDENTIAL and USERNAME as Application Settings, I'm unsure how to pass these to the container. I see from the logs that on startup Azure passes some of its own environment variables, but if I add a startup command with my environment variables, it gets tacked onto the end of the command generated by Azure, producing an invalid command. How can I pass mine to the container?

As I understand it, you want to set environment variables in the Docker container with the -e option.
You don't need a startup command for that. Define these variables as Application Settings instead:
Application Settings are exposed as environment variables for access by your application at runtime.
Documentation
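For example, once CREDENTIAL and USERNAME are defined as Application Settings, the code running in the container can read them like any other environment variable. A minimal sketch, assuming a Python app (the same idea applies in any language):
import os
credential = os.environ["CREDENTIAL"]  # populated from the App Setting at runtime
username = os.environ["USERNAME"]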

Related

Accessing aws credentials set as env variables in docker run command

Is there a way to use AWS credentials, passed as environment variables to the docker run command, to get the caller identity details while the container is running?
This is the docker run command being executed in the application:
docker run --rm -e AWS_ACCESS_KEY={user_credentials["AccessKeyId"]} -e AWS_SECRET_ACCESS_KEY={user_credentials["SecretAccessKey"]} -e AWS_SESSION_TOKEN={user_credentials["SessionToken"]} image_name
The answer is actually simple, but definitely something I was not aware of.
I initialized an STS client with the given credentials and then made a call to get the caller identity details, retrieving the credentials with the os module. The scope of my application is very limited, hence using the credentials to get the user account details. This is what worked for me:
import os
import boto3
# read the values that were passed to the container with -e
sts_client = boto3.client('sts', aws_access_key_id=os.environ['AWS_ACCESS_KEY'],
                          aws_secret_access_key=os.environ['AWS_SECRET_ACCESS_KEY'],
                          aws_session_token=os.environ['AWS_SESSION_TOKEN'])
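The caller-identity lookup mentioned above is then a single call on that client; a minimal sketch using the sts_client from the snippet:
identity = sts_client.get_caller_identity()
print(identity["Account"], identity["Arn"])  # account and ARN of the credentials in use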

Is it possible to initialize .net core non-pcf deployed application with values from Config Server

I have a .NET Core app hosted on PCF, and I have Config Server installed.
I want to run this application locally with IIS Express and load the same config values it will have when deployed to PCF, and I do not want to deploy it to PCF Dev because I want to debug it.
Is it possible? The only workaround I have is to fetch all the variables into user-managed secrets, but it's awful.
Steeltoe and SCS Client look at the VCAP_SERVICES environment variable to load the configuration they use to talk with Config Server. On PCF, this environment variable is automatically populated with information based on the services that you bind to your app.
I do not know of any tool to manage/bind services locally, but you can always set environment variables manually. If you were to run cf env <app> for an app that is bound to your Config Server, it will list the contents of the VCAP_SERVICES env variable. Copy that output, paste it into an environment variable on your local machine. Fire up your app and Steeltoe or SCS Client should pick that information up automatically.
Hope that helps!
If you don't want to connect to the exact same config server, you can run the config server locally with Java or Docker and point it at the same back-end. The Steeltoe docs include instructions for running the config server with Maven and the Music Store sample includes cmd and sh scripts that show running a config server via Docker, though they may be slightly out of date. The most recent way I've run the docker command is something like this:
docker run --rm -ti -p 8888:8888 -v $PWD/config-repo:/config --name steeltoe-config steeltoeoss/configserver --spring.profiles.active=native
run from a location that contains a folder named config-repo holding the relevant config files.

Deploy Azure Webapp with custom container environment variables

In general, I would start the docker instance on my local machine like docker run -t -i -e 'a=b' ...
Now, I would like to deploy and run my custom docker image which I uploaded to the Docker Container Registry before and start it like the command above - with environment variables.
Checking the Azure CLI for Web Apps, you can see that setting environment variables should be possible in general. But it seems these "environment variables" are not the environment variables that are passed to the docker command. Why? Checking the container log, I can see how the docker container is started; no environment variables are set there.
With Azure Container Instances, it would work like this: az container create ... --environment-variables a=b. These environment variables are passed down to the container, and this is exactly what I am looking for with Web Apps.
Does anyone have experience deploying Azure Web Apps with custom Docker instances started with environment variables?
I guess I found the solution for the problem:
App Settings are injected into your app as environment variables at runtime.
If you need to set an environment variable for your application, simply add an App Setting in the Azure portal. When your app runs, we will inject the app setting into the process as an environment variable automatically.
How it works via CLI:
az webapp config appsettings set --name <mycontainername> --resource-group <myresourcegroupname> --settings a='b'
Setting all environment variables via the CLI as in the command above worked for me. The same is possible via the portal UI in App Settings. If you check how Azure starts the Docker instance, you will see that none of the settings show up in the startup command (i.e. you will not see something like docker run -d -p 3287:3000 --name <mycontainername> -e a=b), but if you log in to the Docker container and echo the environment variable, you will see that it has been injected.
Note: Maybe you have to restart the Docker instance in order to have the new environment variables injected.
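Reading the value from inside the container works the same way as echoing it; a minimal sketch, assuming a Python process and the a=b setting from above:
import os
print(os.environ.get("a"))  # prints "b" once the app setting has been injected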

Login to azure container

I used the following quickstart doc to spin up my first Azure container.
https://learn.microsoft.com/en-us/azure/container-instances/container-instances-quickstart#feedback
It worked fine, but how do I connect to the container if I want to debug something?
You cannot connect to the container itself directly to debug, i.e. you can't SSH or RDP to it. Take a look at this graphic, which highlights how a container differs from a virtual machine:
You can however pull logs from your container from the container engine. In your case you would want to use the following command in the Azure CLI: az container logs.
https://aka.ms/container_logs
When you invoke the CLI through the Portal, you should already be connected to your subscription. To debug or troubleshoot, you can look at the container logs. Check out this documentation for the exact commands:
https://learn.microsoft.com/en-us/cli/azure/container?view=azure-cli-latest#az-container-logs
When I am building containers to run on ACI, I build them first in a local Docker instance where they can be connected to and interactively debugged. When you're happy with how they run locally, push them into ACI and debug from the output logs if needed.
I get to the bash shell in my Azure containers either with the azure-cli package, as the OP noted in a comment:
az container exec --resource-group <myresourcegroup> --name <mycontainergroup> --exec-command "/bin/bash"
Or by navigating to a container instance in the Azure portal; under Settings/Containers there is a "Connect" tab.

Azure Docker Container - how to pass startup commands to a docker run?

Faced with this screen, I have managed to easily deploy a Rails app to Azure on the Docker container App Service, but logging is a pain since the only way to access logs is through FTP.
Has anyone figured out a good way of running the docker run command inside Azure so it essentially accepts any params?
In this case it's simply trying to log to a remote service. If anyone also has other suggestions for retrieving logs besides FTP, I would massively appreciate it.
No, at the time of writing this is not possible; you can only pass in what you would normally pass after the container name in docker run container:tag %YOUR_STARTUP_COMMAND_WILL_GO_HERE_AS_IS%.
TL;DR: you cannot pass any startup parameters to a Linux Web App except for the command that needs to be run in the container. Let's say you want to run your container called MYPYTHON using the PROD tag and run some Python code; you would do something like this:
Startup Command = /usr/bin/python3 /home/code/my_python_entry_point.py
and that would get appended (AT THE VERY END ONLY) to the actual docker command:
docker run -t username/MYPYTHON:PROD /usr/bin/python3 /home/code/my_python_entry_point.py
