I am running an R Shiny app inside a container in an Azure Container Instance. Whenever I change the source code of my app, a DevOps pipeline rebuilds the image in the build pipeline and updates the container instance in the release pipeline via the Azure CLI commands az container create and az container restart.
After spinning it up, I need to run a bash command, namely to automatically adjust a file within the created container.
In local Docker this would be
docker exec {containerName} /bin/bash -c 'echo "var1 = \"val1\"" >> /home/shiny/.Renviron'
This means: run a bash command in the container to append some text to the .Renviron file inside the container.
However, the Azure Container Instances documentation says you cannot pass command arguments to az container exec: https://learn.microsoft.com/en-us/azure/container-instances/container-instances-exec
How, then, would you go about building, releasing, and configuring a container in an automated build/release process in Azure?
I do not want to set those values within the build pipeline, as I want to use the same image for different staging areas and set those values accordingly.
Many thanks in advance for your help.
I am very new to Azure Container Instances, so I may not understand your aims, but it seems another way of achieving this:
I do not want to set those values within the build pipeline, as I want to use the same image for different staging areas and set those values accordingly.
could be to modify your parameter values upon container creation using the --command-line flag mentioned here. Something like
az container create -g MyResourceGroup --name myapp --image myimage:latest --command-line "/bin/sh -c '/path/to/myscript.sh val1 val2'"
in which myscript.sh runs your app with the indicated values.
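For illustration, myscript.sh could be a small wrapper along these lines (a sketch only; the .Renviron path, variable names, and launch command are assumptions, not part of the original question):
#!/bin/bash
# myscript.sh - hypothetical wrapper: write the passed values into
# .Renviron, then start the Shiny app.
echo "var1 = \"$1\"" >> /home/shiny/.Renviron
echo "var2 = \"$2\"" >> /home/shiny/.Renviron
# replace with your actual app launch command
exec /usr/bin/shiny-server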
The application was using the Docker CLI to build an image and then push it to Azure Container Registry. This used to work fine on Kubernetes, using a Python module and docker.sock, but since the cluster was upgraded, the Docker daemon is gone. I'm guessing the Kubernetes backend no longer uses Docker or has it installed. Also, since Docker support is going away in Kubernetes (I think it said 1.24), I want to get away from depending on Docker for the build.
So the application, when working, was a Python application running in a Docker container. It would take the Dockerfile, build it, and push it to Azure Container Registry. There are files that get copied into the image via the Dockerfile, and they all exist in the same directory as the Dockerfile.
Anyone know of different methods to achieve this?
I've been looking at Azure ACR Tasks but I'm not really sure how all the files get copied over to a task and have not been able to find any examples.
I can confirm that running an Azure ACR Task (Multi-Task or Quick Task) will copy the files over when the command is executed. We're using Azure ACR Quick Tasks to achieve something similar. If you're just trying to do the equivalent of docker build and docker push, Quick Tasks should work fine for you too.
For simplicity I'm gonna list the example for a Quick Task because that's what I've used mostly. Try the following steps from your local machine to see how it works. Same steps should also work from any other environment provided the machine is authenticated properly.
First make sure you are in the Dockerfile directory and then:
1. Authenticate to the Azure CLI using az login
2. Authenticate to your ACR using az acr login --name myacr
3. Replace the values accordingly and run az acr build --registry myacr -g myacr_rg --image myacr.azurecr.io/myimage:v1.0 .
Your terminal should already show all of the steps that the Dockerfile is executing. Alternatively, you can head over to your ACR and look under Services > Tasks > Runs. You should see every line of the Docker build task appear there.
Note: If you're running this task in an automated fashion and also require access to internal/private resources during the image build, you should consider creating a Dedicated Agent Pool and deploying it in your VNET/SNET, instead of using the shared/public Agent Pools.
In my case, I'm using Terraform to run the az acr build command, and I can confirm the Dockerfile's COPY commands execute without any issues.
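For reference, here is a minimal sketch of the same Quick Task run from an automated script (the service principal variables and image names are placeholders):
# authenticate non-interactively, e.g. with a service principal
az login --service-principal -u "$SP_APP_ID" -p "$SP_SECRET" --tenant "$TENANT_ID"
# az acr build uploads the current directory (the build context) to ACR,
# so files referenced by COPY in the Dockerfile are available to the remote build
az acr build --registry myacr -g myacr_rg --image myacr.azurecr.io/myimage:v1.0 .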
I have an Azure release pipeline that uses an Azure Web App for Containers task to deploy a docker image on an Azure App Service.
The image is specified in the form some_image:$(Build.BuildId). The pipeline works as intended and successfully updates the App Service with the latest build of the image.
From another release pipeline, I want to execute a docker run command using that image. I've noticed that version 1 of the Docker task allows me to execute such a docker run command on a docker image (no idea why run is missing from version 2), but how can I specify the docker image? How can I find out which image is currently deployed on that App Service?
You can use either PowerShell or a shell script in the YAML pipeline. Since you already know the container registry and the image name, just use the command below to get the latest tag:
az acr repository show-tags -n MyRegistry --repository MyRepository --top 1 --orderby time_desc --detail
https://learn.microsoft.com/en-us/cli/azure/acr/repository?view=azure-cli-latest#az_acr_repository_show_tags
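As a sketch, you could capture the tag in a shell variable and pass it to docker run (registry and repository names are placeholders):
# without --detail the command prints just the tag name
TAG=$(az acr repository show-tags -n MyRegistry --repository MyRepository --top 1 --orderby time_desc -o tsv)
docker run myregistry.azurecr.io/myrepository:$TAG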
Might be too late now, but what you want is to get the value of the linuxFxVersion property (if you're running Docker on Linux) from Azure Resource Explorer.
Using a combination of PowerShell and the Azure CLI, you can use these commands to retrieve the current image running on your web app:
$webAppProperties = (az webapp config show --subscription "<subscription-id>" --resource-group "<resource-group-name>" -n "<webapp-name>") | ConvertFrom-Json
$webAppProperties.linuxFxVersion
Assuming you have the right permissions to your subscription from Azure Pipelines, you should be able to use this information for the next steps.
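As a follow-up sketch, assuming the web app runs a custom Linux container, the value has the form DOCKER|<image>:<tag>, so the image reference can be extracted with the Azure CLI alone (names are placeholders):
FX=$(az webapp config show --resource-group "<resource-group-name>" -n "<webapp-name>" --query linuxFxVersion -o tsv)
IMAGE=${FX#DOCKER|}   # strip the "DOCKER|" prefix, leaving image:tag
docker run "$IMAGE"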
My goal is to run a Python script, which is passed to a Docker container containing source code and dependencies. This is to be run using Azure Container Instances (ACI). On my home machine I can do this in Docker (setting ENTRYPOINT or CMD to "python") with
docker run myimage "myscript.py"
provided I spin up the container from a directory containing myscript.py.
After some reading I thought a similar thing could be achieved on Azure using az container create --command-line, as indicated here. My container creation would be something like
az container create \
--resource-group myResourceGroup \
--name my-container \
--image myimage:latest \
--restart-policy Never \
--command-line "python 'myscript.py'"
However, the container is unable to find myscript.py. I am using the Azure Cloud Shell. I have tried spinning up the container from a directory containing myscript.py, and I have also tried adding a file share with myscript.py inside. In both cases I get the error
python3: can't open file 'myscript.py': [Errno 2] No such file or directory
I think I simply do not understand containers and how they interact with the host directory structure on Azure. Can anyone provide some suggestions or pointers to resources?
provided I spin up the container from a directory containing myscript.py.
Where you issue the command to start a container instance from does not matter at all. The files need to be present inside your image, or alternatively you can mount storage into the container and read them from there.
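For the file-share approach you tried, here is a sketch of mounting an Azure file share that contains myscript.py into the container (storage account, key, and share names are placeholders):
az container create \
  --resource-group myResourceGroup \
  --name my-container \
  --image myimage:latest \
  --restart-policy Never \
  --azure-file-volume-account-name mystorageaccount \
  --azure-file-volume-account-key "$STORAGE_KEY" \
  --azure-file-volume-share-name myshare \
  --azure-file-volume-mount-path /mnt/scripts \
  --command-line "python /mnt/scripts/myscript.py"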
I am deploying a Container group with the template https://learn.microsoft.com/en-us/azure/templates/microsoft.containerinstance/2018-10-01/containergroups
It has a command parameter, but it is just a string and runs a single command. I would like to run multiple commands when deploying. Is that possible?
If not, is there a way to run those commands in the container after it has been deployed, using PowerShell?
My use case:
I need an SFTP server in Azure for customers to be able to send us data. I then poll that with a Logic App.
What I have done:
I found this template to be good for my needs, as it is easier to poll Azure Storage File Share.
https://github.com/Azure/azure-quickstart-templates/blob/master/201-aci-sftp-files
My problem is that I have multiple users. Each needs their own username/password and their own file share or sub-directory in that share. I also can't understand how to configure multiple users through the environment variable; I tried separating them with ;. It deploys, but the server doesn't respond to requests at all.
I can deploy multiple containers, one for each user, but that doesn't sound like a good idea as the number of customers grows.
Unfortunately, it seems that you cannot run multiple commands at once. See the restrictions of the exec command for ACI:
Azure Container Instances currently supports launching a single process with az container exec, and you cannot pass command arguments. For example, you cannot chain commands like in sh -c "echo FOO && echo BAR", or execute echo FOO.
Instead, after you create the ACI, you can run a command that creates an interactive session with the container instance and execute your commands continuously there.
For Linux:
az container exec -g groupName -n containerName --exec-command "/bin/bash"
For Windows:
az container exec -g groupName -n containerName --exec-command "cmd.exe"
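Once the session is open, you are in a normal shell inside the container, where chaining commands works as usual, for example:
echo FOO && echo BAR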
In general, I would start the Docker container on my local machine with something like docker run -t -i -e 'a=b' ...
Now I would like to deploy and run my custom Docker image, which I previously uploaded to the Docker Container Registry, and start it like the command above, with environment variables.
Checking the Azure CLI for Web Apps, you can see that setting environment variables should generally be possible. But it seems these "environment variables" are not the environment variables that are passed to the docker command. Why? Checking the container log, I can see how the Docker container is started: no environment variables are set.
With Azure Container Instances, it would work like this: az container create ... --environment-variables a=b. These environment variables are passed down to the container/Docker. This is exactly what I am looking for with Web Apps.
Does anyone have experience deploying Azure Web Apps with custom Docker containers started with environment variables?
I guess I found the solution to the problem:
App Settings are injected into your app as environment variables at runtime.
If you need to set an environment variable for your application, simply add an App Setting in the Azure portal. When your app runs, we will inject the app setting into the process as an environment variable automatically.
How it works via CLI:
az webapp config appsettings set --name <mycontainername> --resource-group <myresourcegroupname> --settings a='b'
Setting all environment variables via the CLI like the command above worked for me. The same is possible via the portal UI under app settings. If you check how Azure starts the Docker instance, you will see that none of the set environment variables appear in the startup command (i.e., nothing like docker run -d -p 3287:3000 --name <mycontainername> -e a=b), but if you log in to the Docker container and run an echo command for the environment variable, you will see that it has been injected.
Note: Maybe you have to restart the Docker instance in order to have the new environment variables injected.
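A sketch of restarting and then verifying from inside the container (assumes a Linux web app with SSH enabled in the image; names are placeholders):
az webapp restart --name <mycontainername> --resource-group <myresourcegroupname>
# open an SSH session into the running container...
az webapp ssh --name <mycontainername> --resource-group <myresourcegroupname>
# ...then, inside that session, check the injected variable:
echo $a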