Azure ACI container deployment fails when launching with Command line arguments - azure

I am trying to launch an ACI container from the Azure CLI. The deployment fails when I send multiple commands from the command line, and it succeeds when I just pass one command, like 'ls'.
Am I passing multiple arguments to the command line in the wrong way?
az container create --resource-group rg-***-Prod-Whse-Containers --name test --image alpine:3.5 --command-line "apt-get update && apt-get install -y wget && wget https://www.google.com" --restart-policy never --vnet vnet-**-eastus2 --subnet **-ACI

Unfortunately, it seems that you cannot run multiple commands at once. See the restrictions of the exec command for ACI:
Azure Container Instances currently supports launching a single process with az container exec, and you cannot pass command arguments. For example, you cannot chain commands like in sh -c "echo FOO && echo BAR", or execute echo FOO.
You can only execute single commands such as whoami or ls in the CLI.
I suggest you create an interactive session with the container instance after you create the ACI, so you can execute commands continuously; refer to this similar issue.
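Note that the restriction quoted above is for az container exec; for the startup command given to az container create, a commonly suggested workaround (a sketch, not verified here; also note that alpine uses apk, not apt-get) is to hand the whole chain to a single shell process:

```shell
# Hypothetical rewrite of the failing create command: one process (/bin/sh)
# receives the entire chain as a single argument, so ACI still launches
# only one process:
#   az container create ... --command-line "/bin/sh -c 'apk update && apk add wget && wget https://www.google.com'"
# The wrapping mechanism itself, shown locally:
/bin/sh -c 'echo FOO && echo BAR'
```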

Related

Can an Azure Machine Learning Compute Instance shut down itself automatically with a bash script executed by crontab?

I have a compute instance (CI) that starts at 12:00 with the Azure ML scheduler and runs a job scheduled in the CI's crontab at 12:10. The thing is that this job doesn't always take the same time to finish, so I want the CI to shut itself down when done.
The script that the crontab executes is the following:
---------------------------------------------------------
#!/bin/bash
...
# CREATE FOLDER FOR LOGS
foldername=$PROJECT_PATH/$(date '+%d_%m_%Y_%H_%M_%S')
mkdir $foldername
filename=az_login.txt
path=$foldername/$filename
touch $path
az login -u *<USERNAME>* -p *<PASSWORD>* > $path
filename=acr_login.txt
path=$foldername/$filename
touch $path
# Authenticate to ACR
az acr login --name $ACR_NAME > $path
filename=pull_container.txt
path=$foldername/$filename
touch $path
# Pull the container image from ACR
docker pull $ACR_NAME.azurecr.io/$IMAGE_NAME:$IMAGE_TAG > $path
filename=run.txt
path=$foldername/$filename
touch $path
# Run the container image
docker run -v $CREDENTIALS_PATH:/app/config_privilegies $ACR_NAME.azurecr.io/$IMAGE_NAME:$IMAGE_TAG > $path
filename=rm_container.txt
path=$foldername/$filename
touch $path
# Delete the exited containers
docker rm $(docker ps -a -q --filter "status=exited") > $path
az ml compute stop --name *<CI_NAME>* --resource-group *<RESOURCE_NAME>* --workspace-name *<WORKSPACE_NAME>* --subscription *<SUBSCRIPTION_NAME>*
Everything works great until the stop command; in this particular code, it does nothing.
I've tried putting the last command in a separate bash script and changing the last line to "./close_ci.sh". However, this doesn't work either: it restarts the CI instead of stopping it.
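One thing worth trying (an untested assumption on my part): detach the stop call with nohup so the request is not killed mid-flight when the instance begins shutting down. The az ml compute stop arguments stay unchanged; the sketch below demonstrates only the detachment, with a placeholder in place of the real call:

```shell
# nohup + & detaches the command from the calling script; substitute the
# real "az ml compute stop ..." invocation for the placeholder sh -c.
nohup sh -c 'echo stop requested' > stop.log 2>&1 &
wait    # only for this demo, so the output is flushed before we read it;
        # the real crontab script would simply exit here
cat stop.log
```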

Setting environment variables on docker before exec

I'm running a set of commands on an ubuntu docker, and I need to set a couple of environment variables for my scripts to work.
I have tried several alternatives but none of them seem to solve my problem.
Alternative 1: Using --env or --env-file
On my already running container, I run either:
docker exec -it --env TESTVAR="some_path" ai_pipeline_new_image bash -c "echo $TESTVAR"
docker exec -it --env-file env_vars ai_pipeline_new_image bash -c "echo $TESTVAR"
The content of env_vars:
TESTVAR="some_path"
In both cases the output is empty.
Alternative 2: Using a dockerfile
I create my image using the following docker file
FROM ai_pipeline_yh
ENV TESTVAR "A_PATH"
With this alternative the variable is set if I attach to the docker (aka if I run an interactive shell), but the output is blank if I run docker exec -it ai_pipeline_new_image bash -c "echo $TESTVAR" from the host.
What is the clean way to do this?
EDIT
Turns out that if I check the state of the variables from a shell script, they are set, but not if I check them directly with bash -c "echo $VAR". I would really like to understand why this is so. I provide a minimal example:
Run docker
docker run -it --name ubuntu_env_vars ubuntu
Create a file that echoes a VAR (inside the container)
root@afdc8c494e8a:/# echo "echo \$VAR" > env_check.sh
root@afdc8c494e8a:/# chmod +x env_check.sh
From the host, run:
docker exec -it -e VAR=BLA ubuntu_env_vars bash -c "echo $VAR"
(Blank output)
From the host, run:
docker exec -it -e VAR=BLA ubuntu_env_vars bash -c "/env_check.sh"
output: BLA
Why???????
I revealed my noobness. Answering my own question here:
Both options, --env-file file or --env foo=bar, are okay.
I forgot to escape the $ character when testing. The correct command to test whether the variable exists is therefore:
docker exec -it my_docker bash -c "echo \$MYVAR"
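The same quoting behavior can be reproduced without Docker at all: in double quotes the outer shell expands $VAR before the inner shell ever runs, while an escaped \$ (or single quotes) defers expansion to the inner shell:

```shell
unset VAR                       # make sure the outer shell has no VAR
env VAR=BLA sh -c "echo \$VAR"  # escaped: inner shell expands -> BLA
env VAR=BLA sh -c 'echo $VAR'   # single quotes: inner shell expands -> BLA
env VAR=BLA sh -c "echo $VAR"   # unescaped: outer shell expands first -> empty line
```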
It's a good design choice to have your apps driven by environment variables read at runtime (application start).
Don't rely on env variables at the docker build stage.
Sometimes the problem is not the environment variables or Docker; the problem is the app that reads the environment variables.
Anyway, you have these options to inject environment variables into a Docker container at runtime:
-e foo=bar
This is the most basic way:
docker run -it -e foo=bar ubuntu
These variables will be available since the start of your container.
remote variables
If you need to pass several variables, using -e will not be the best way.
If you don't want to use .env files or any other kind of local variables file, you should:
prepare your app to read environment variables
inject the variables in a Docker entrypoint bash script, reading them from a remote variables manager
In the shell script you fetch the remote variables and inject them using source /foo/bar/variables. A sample here.
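As a sketch (the manager endpoint and the variable names here are made up), such an entrypoint could look like this:

```shell
#!/bin/sh
# entrypoint.sh -- hypothetical sketch: fetch variables from the remote
# manager and source them before starting the app.
# In a real container the fetch would be something like:
#   curl -s "$VARS_URL" -o /tmp/variables
# Simulated fetch so the sketch is self-contained:
printf 'export DB_HOST=db.example.com\n' > /tmp/variables
. /tmp/variables                # inject the variables into this shell
echo "DB_HOST=$DB_HOST"
# exec "$@"                     # finally hand off to the real application
```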
With this approach you will have a variables manager for all of your containers. This variables manager has these features:
login
create applications
create variables by application
create global variables if a variable is required for two or more apps
expose an http endpoint to be used in the client (apps) in order to get the variables
crypt
etc
You have these options:
spring cloud
zookeeper
https://www.vaultproject.io/
https://www.doppler.com/
Configurator (I'm the author)
Let me know if you want to use this approach to help you.

Running a docker logs command in background remotely over ssh

We have a use case where the docker remote execution is separately executed on another server.
So users log in to server A and submit an ssh command which runs a script on remote server B.
The script performs several docker commands like prune, build, and run, which are working fine.
I have a command at the end of the script which is supposed to write the docker logs in the background to an EFS file system that is mounted on both servers A and B. This way users can access the log file from server A without actually logging in to server B (to prevent access to the containers).
I have tried all available solutions related to this and nothing seems to work for running a process in the background remotely.
Any help is greatly appreciated.
The code below is the script on the remote server; the user calls it from server A over ssh, like ssh id#serverB-IP docker_script.sh
args=("$@")   # not shown in the original snippet; the ${args[...]} references need it

docker_gen_log(){
loc=${args[0]}
cd "$loc"
imagename=${args[1]}
docker logs -f "$imagename" &> run.log &
}

loc=${args[0]}
cd "$loc"
# Set parameters
imagename=${args[1]}
port=${args[2]}
desired_port=${args[3]}
docker stop "$imagename" && sleep 10 || true
docker build -t "$imagename" "$loc" |& tee build.log
docker system prune -f
port_config=$([[ -z "${port}" ]] && echo '' || echo -p "$desired_port:$port")
docker run -d --rm --name "$imagename" $port_config "$imagename"
sleep 10
docker_gen_log "$loc" "$imagename"
echo ""
echo "Docker build and run are now complete ....Please refer to the logs under $loc/build.log $loc/run.log"
If you're only running a single container like you show, you can just run it in the foreground. The container logs will go to the docker run stdout/stderr and you can collect them normally.
docker run --rm --name "$imagename" $port_config "$imagename" \
> "$loc/run.log" 2>&1
# also note no `-d` option
echo ""
echo "Docker build and run are now complete ....Please refer to the logs under $loc/build.log $loc/run.log"
If you have multiple containers, or if you want the container to keep running if the script disconnects, you can collect logs at any point until the container is removed. For this sort of setup you'd want to explicitly docker rm the container.
docker run -d --name "$imagename" $port_config "$imagename"
# note, no --rm option
docker wait "$imagename" # actually container name
docker logs "$imagename" >"$loc/run.log" 2>&1
docker rm "$imagename"
This latter approach won't give you incremental logs while the container is running. Given that your script seems to assume the container will be finished within 10 seconds that's probably not a concern for you.

How to execute linux commands in the shell opened through /bin/bash

I am new to Linux and would like to know how to run commands when opening a shell through /bin/bash.
Eg the steps that I want to perform:
Step1: Run the docker exec command to start the quickstart virtual machine.
$ docker exec -it 7f8c1a16e5b2 /bin/bash
Step2: The above command gives a shell into the quickstart VM on the console. Now I want to run the commands below by default whenever someone starts the docker quickstart console (step 1):
cd
. ./.bash_profile
I need some guidance on how to do this. Obviously, putting all these statements in one shell script doesn't help, as the commands of Step2 are to be executed in the newly opened shell (of the quickstart VM). The idea is to put all these statements in a single shell script and execute it when we want to get hold of the session within the VM console.
You can pass the commands you want to be executed inside the container to bash with the -c option.
That would look something like this:
docker exec -it 7f8c1a16e5b2 /bin/bash -c "cd && . ./.bash_profile && /bin/bash"
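Assuming the profile lives in the default ~/.bash_profile, a shorter variant is to start bash as a login shell with -l, which sources the profile automatically:

```shell
# Equivalent idea for the docker case (not run here):
#   docker exec -it 7f8c1a16e5b2 /bin/bash -l
# The -l mechanism itself, demonstrated locally with a throwaway HOME:
export HOME=$(mktemp -d)
echo 'export PROFILE_LOADED=yes' > "$HOME/.bash_profile"
bash -l -c 'echo "PROFILE_LOADED=$PROFILE_LOADED"'
```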

How to send bamboo variables from Bamboo script to docker container?

I'm using Docker plugin for bamboo and I need to execute a script in the docker container.
The sh script contains:
echo \"ini_source_path\": \"${bamboo.ini_source_path}\",
and if I put this line directly in Container Command, ${bamboo.ini_source_path} will be replaced with the value of this variable.
The problem is when I put /bin/bash script.sh in Container Command, because I get an error:
script.sh: line 35: \"${bamboo.ini_source_path}\",: bad substitution
Is there a way I can reach bamboo.ini_source_path variable from my script in docker container?
Thanks!
What version of Bamboo are you using? This problem was fixed in Bamboo 6.1.0:
Unable to use variables in Container name field in Run docker task
Workaround:
Create a Script Task that runs before the Docker Task.
Run commands like
echo "export sourcepath=$ini_source_path" > scriptname.sh
chmod +x scriptname.sh
The Docker Task will map the ${bamboo.working.directory} to the Docker /data volume, so the just-created scriptname.sh script is available in the Docker container. The script will be executed and will set the variable correctly.
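Putting the workaround together (the /data path follows from the volume mapping described above; the file and variable names are just the example's), the Container Command would source the generated script before running your own. The sourcing mechanism, shown locally:

```shell
# In the Container Command you would run something like:
#   /bin/bash -c "source /data/scriptname.sh && /data/script.sh"
# Demonstrated locally:
echo 'export sourcepath=/some/ini/path' > scriptname.sh
bash -c "source ./scriptname.sh && echo \"sourcepath=\$sourcepath\""
```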
