Deploying Azure containers without running them - azure

I am new to running containers in Azure and I am puzzled by the use case below.
If you feel I am using containers wrongly, feel free to suggest an alternative approach.
Core problem: I am not able to (and don't know how to) create a container instance in a "stopped" state, either via the command line or via an ARM template deployment.
The long-read use case:
I created a docker image that runs a python job.
The job needs to run daily and is triggered via a Data Factory. The Data Factory figures out the environment, sets up the Docker commands, updates the container image, and then executes the container via a Batch account. The job itself makes an API call and writes some data to SQL. This part works fine: the container status goes to Running and it stops afterwards (I turned auto-restart off).
In Azure DevOps, I have a pipeline that builds the image of the job and stores it in an Azure repository. This works fine.
As I need a container instance as a resource in my resource group, I decided to put it in my infra ARM template. The problem: when deploying the container via DevOps / ARM template deployment,
it deploys and runs the job instance, and this is not great. I would like the container to be created in a "stopped" state, because otherwise the job writes unwanted data to our database.
I am wondering what the best approach / best guidelines would be. I thought about the two scenarios below, but for both my intuition says no.
Scenario 1: have a container running at all times (let it execute /bin/bash) and deliver the command using "az container exec".
Why I don't want to do this: I currently have one image per environment that contains the exact job for that environment, and I don't see the usefulness of having three standby containers running all the time just to be triggered once a day.
Scenario 2: instead of handling container instance creation via DevOps, ignore the problem and create it using Data Factory and the Batch account. This implies that, when the job is triggered, it creates (and therefore runs) the container. I could then delete it after use.
Why I don't want to do this: I see a container instance as part of the infrastructure (it is something you deploy inside a resource group; correct me if my point of view is wrong). In that sense, managing resources via a scheduled Data Factory job doesn't look right, and it is a kind of hack to work around the fact that you cannot deploy a container instance in a stopped state.
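For what it's worth, scenario 2 comes down to two CLI calls (resource group, instance name, and image here are placeholders; --restart-policy Never keeps the finished job from restarting):

```shell
# Create the instance on demand; it runs the job once and then terminates
az container create \
  --resource-group my-rg \
  --name my-data-job \
  --image myregistry.azurecr.io/my-data-job:latest \
  --restart-policy Never

# Remove the instance once the job has finished
az container delete --resource-group my-rg --name my-data-job --yes
```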
# set base image (host OS)
FROM python:3.7-buster
# Argument for environment selection
ARG environment
ENV environment ${environment}
# set the working directory in the container
WORKDIR /
# copy the dependencies file to the working directory
COPY requirements.txt .
# install FreeTDS and dependencies
RUN apt-get update && apt-get -y install apt-transport-https curl
RUN curl https://packages.microsoft.com/keys/microsoft.asc | apt-key add -
RUN curl https://packages.microsoft.com/config/debian/10/prod.list > /etc/apt/sources.list.d/mssql-release.list
# (the "exit" step from Microsoft's shell instructions is not needed in a Dockerfile)
RUN apt-get update
RUN ACCEPT_EULA=Y apt-get -y install msodbcsql17
RUN apt-get -y install unixodbc-dev
RUN pip install pyodbc
RUN pip install -r requirements.txt
# copy the content of the local src directory to the working directory
COPY /src .
# command to run on container start
CMD python "./my_data_job.py" "./my_config.ini" ${environment}
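Since the environment is baked in at build time via ARG, each per-environment image would be built with something like this (the tag and value are illustrative):

```shell
docker build --build-arg environment=dev -t my-data-job:dev .
```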

For Azure Container Instances, the container group will always be in the Running state until you stop it, but the containers inside it can be in the Terminated state. In fact, if your image runs a one-off job, the container will be in the Terminated state once the job finishes. You can use the CLI command az container exec to run the job again as you need.
So it's impossible to create an ACI in a stopped state. Alternatively, you could use AKS: create different deployments for the different environments, and when you need a container to run the job, scale up to one replica. When you don't need the container, scale back down to zero.
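Assuming an AKS deployment named my-data-job already exists (the deployment name and namespace are placeholders), the scale-up/scale-down approach looks like:

```shell
# Start the job: bring the deployment up to one replica
kubectl scale deployment my-data-job --namespace jobs --replicas=1

# When the job is done, scale back to zero so no pod is running
kubectl scale deployment my-data-job --namespace jobs --replicas=0
```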

Related

After installing docker I am unable to run commands that I used to be able to run

Two examples include snap and certbot. I used to type sudo certbot and would be able to add SSL certs to my nginx servers. Now I get the output below every time I enter certbot, and the same thing happens with snap. I'm new to Docker and don't understand what is going on. Can somebody explain?
Usage: docker compose [OPTIONS] COMMAND
Docker Compose
Options:
--ansi string Control when to print ANSI control characters ("never"|"always"|"auto") (default "auto")
--compatibility Run compose in backward compatibility mode
--env-file string Specify an alternate environment file.
-f, --file stringArray Compose configuration files
--profile stringArray Specify a profile to enable
--project-directory string Specify an alternate working directory
(default: the path of the, first specified, Compose file)
-p, --project-name string Project name
Commands:
build Build or rebuild services
convert Converts the compose file to platform's canonical format
cp Copy files/folders between a service container and the local filesystem
create Creates containers for a service.
down Stop and remove containers, networks
events Receive real time events from containers.
exec Execute a command in a running container.
images List images used by the created containers
kill Force stop service containers.
logs View output from containers
ls List running compose projects
pause Pause services
port Print the public port for a port binding.
ps List containers
pull Pull service images
push Push service images
restart Restart service containers
rm Removes stopped service containers
run Run a one-off command on a service.
start Start services
stop Stop services
top Display the running processes
unpause Unpause services
up Create and start containers
version Show the Docker Compose version information
Run 'docker compose COMMAND --help' for more information on a command.
NEVER INSTALL DOCKER WITH SNAP
I solved the problem. Not sure where everything went wrong, but I completely removed snapd from my system following this: https://askubuntu.com/questions/1280707/how-to-uninstall-snap. Then I installed snap again and everything works.
INSTALL DOCKER WITH THE OFFICIAL GUIDE (APT)
Go here to install docker the correct way. https://docs.docker.com/engine/install/ubuntu/
If you are new to Docker, follow this advice and NEVER TYPE snap install docker into your terminal. Follow these words of wisdom, or use the first half if you have already messed up.
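For reference, the official apt-based install is roughly the following (abridged; check the linked guide for the current steps):

```shell
# Add Docker's official GPG key and apt repository
sudo apt-get update
sudo apt-get install -y ca-certificates curl
sudo install -m 0755 -d /etc/apt/keyrings
sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
sudo chmod a+r /etc/apt/keyrings/docker.asc
echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] \
  https://download.docker.com/linux/ubuntu $(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
  sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update

# Install the engine and the compose/buildx plugins
sudo apt-get install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
```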

Azure ACR Tasks API? Have an application running in docker container that needs to to build and push images to ACR

The application was using the Docker CLI to build and then push an image to Azure Container Registry. It used to work fine on Kubernetes using a Python module and docker.sock, but since the cluster was upgraded, the Docker daemon is gone; I'm guessing the Kubernetes backend no longer uses Docker or has it installed. Also, since Docker support is going away in Kubernetes (I think it said 1.24), I want to stop depending on Docker for the build.
So the application when working was python application running in a docker container. It would take the dockerfile and build it and push it to azure container registry. There are files that get pushed into the image via the dockerfile and they all exist in the same directory as the dockerfile.
Anyone know of different methods to achieve this?
I've been looking at Azure ACR Tasks but I'm not really sure how all the files get copied over to a task and have not been able to find any examples.
I can confirm that running an Azure ACR Task (Multi-Task or Quick Task) will copy the files over when the command is executed. We're using Azure ACR Quick Tasks to achieve something similar. If you're just trying to do the equivalent of docker build and docker push, Quick Tasks should work fine for you too.
For simplicity I'm gonna list the example for a Quick Task because that's what I've used mostly. Try the following steps from your local machine to see how it works. Same steps should also work from any other environment provided the machine is authenticated properly.
First make sure you are in the Dockerfile directory and then:
Authenticate to the Azure CLI using az login
Authenticate to your ACR using az acr login --name myacr.
Replace the values accordingly and run az acr build --registry myacr -g myacr_rg --image myacr.azurecr.io/myimage:v1.0 .
Your terminal should already show all of the steps that the Dockerfile is executing. Alternatively you can head over to your ACR and look under services>tasks>runs. You should see every line of the Docker build task appear there.
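Collected into one snippet, the steps above look like this (registry, resource group, and image names are the placeholders from the answer):

```shell
# Authenticate, then build and push in one go; the trailing dot sends the
# current directory (Dockerfile plus its context files) to the ACR task
az login
az acr login --name myacr
az acr build --registry myacr -g myacr_rg --image myacr.azurecr.io/myimage:v1.0 .
```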
Note: If you're running this task in an automated fashion and also require access to internal/private resources during the image build, you should consider creating a Dedicated Agent Pool and deploying it in your VNET/SNET, instead of using the shared/public Agent Pools.
In my case, I'm using terraform to run the az acr build command and you can see the Dockerfile executes the COPY commands without any issues.

How to deploy ubuntu container to Azure Container Instance and keep it running

I cannot manage to deploy 'ubuntu' to Azure Container Instances without it becoming "Terminated" right after deployment. I tried setting the command to ["/bin/bash"]; however, that doesn't stop the container from terminating.
This is a common issue. The Docker image ubuntu provides just the base system, and no application runs in it to keep the container instance in the Running state. So you need to add a command in the command line to keep the container instance running, for example tail -f /dev/null.
When you do it in the portal, it should look like this:
It just keeps the container in the running state and does not output anything. So there are no logs output.
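The same thing from the CLI might look like this (resource group and instance name are placeholders):

```shell
# Keep the ubuntu container alive with a no-op foreground command
az container create \
  --resource-group my-rg \
  --name ubuntu-test \
  --image ubuntu \
  --os-type Linux \
  --command-line "tail -f /dev/null"
```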

Azure container instance run bash command

I am running an R Shiny app inside a container in Azure Container Instances. Via a DevOps pipeline, whenever I change the source code of my app, I recreate the container image in the build pipeline and update the container instance via Azure CLI commands in the release pipeline (az container create and az container restart).
After spinning it up I need to run a bash command - namely, automatically adjusting a file within the created container.
In local Docker this would be
docker exec {containerName} /bin/bash -c "echo `"var1 = \`"val1`"`" >> /home/shiny/.Renviron"
This means: run a bash command in the container to push some text into the .Renviron file within the container.
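In plain bash (without the PowerShell backtick escaping), the inner quoting can be checked locally before wrapping the command in docker exec; the temp file here stands in for the container's /home/shiny/.Renviron, and "mycontainer" is a placeholder name:

```shell
# Stand-in for the container's /home/shiny/.Renviron
target=$(mktemp)

# Same inner command that docker exec would run in the container:
#   docker exec mycontainer /bin/bash -c 'echo "var1 = \"val1\"" >> /home/shiny/.Renviron'
/bin/bash -c 'echo "var1 = \"val1\"" >> '"$target"

cat "$target"   # var1 = "val1"
```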
Now, for Azure Container Instances, you cannot pass command arguments with az container exec: https://learn.microsoft.com/en-us/azure/container-instances/container-instances-exec
How would you then in an automated build/release process in Azure go for building, releasing and configuring a container?
I do not want to set those values within the build pipeline as I want to use the same image for different staging areas setting those values accordingly.
Many thanks in advance for your help.
I am very new to azure container instances, so I may not understand your aims but it seems another way of achieving this:
I do not want to set those values within the build pipeline as I want
to use the same image for different staging areas setting those values
accordingly.
could be to modify your parameter values upon container creation using the --command-line flag mentioned here. Something like
az container create -g MyResourceGroup --name myapp --image myimage:latest --command-line "/bin/sh -c '/path to/myscript.sh val1 val2'"
in which myscript.sh runs your app with the indicated values.
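A hypothetical myscript.sh for this (the paths, variable names, and final start command are all assumptions) could write the values passed via --command-line into .Renviron and then start the app. The snippet below only generates the script file:

```shell
# Write the hypothetical entrypoint script to disk
cat > myscript.sh <<'EOF'
#!/bin/sh
# $1 and $2 arrive from: az container create ... --command-line ".../myscript.sh val1 val2"
echo "var1 = \"$1\"" >> /home/shiny/.Renviron
echo "var2 = \"$2\"" >> /home/shiny/.Renviron
exec /usr/bin/shiny-server   # hypothetical command that normally starts the app
EOF
chmod +x myscript.sh
```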

Azure Docker Container - how to pass startup commands to a docker run?

Faced with this screen, I have managed to easily deploy a Rails app to Azure on the Docker container App Service, but logging is a pain since the only way to access logs is through FTP.
Has anyone figured out a good way of running the docker run command inside Azure so it essentially accepts any params?
In this case it's simply trying to log to a remote service. If anyone also has other suggestions for retrieving logs besides FTP, I would massively appreciate it.
No, at the time of writing this is not possible; you can only pass in anything that you would normally pass to docker run container:tag %YOUR_STARTUP_COMMAND_WILL_GO_HERE_AS_IS%, i.e. after the container name.
TL;DR: you cannot pass any startup parameters to a Linux Web App except for the command that needs to be run in the container. Let's say you want to run your container called MYPYTHON using the PROD tag and run some Python code; you would do something like this:
Startup Command = /usr/bin/python3 /home/code/my_python_entry_point.py
and that would get appended (AT THE VERY END ONLY) to the actual docker command:
docker run -t username/MYPYTHON:PROD /usr/bin/python3 /home/code/my_python_entry_point.py
