I have a container image in an Azure Container Registry. I have created a container instance from the registry. This container runs a script.sh as its entrypoint and echoes a value.
FROM ubuntu
WORKDIR /docker
COPY . .
ENTRYPOINT ["./script.sh"]
#!/bin/bash
if [[ -z "$1" ]]; then
  echo "simple task: no parameters were passed"
else
  echo "$1"
fi
How do I execute the container and give it a different starting value?
In Docker we can just put values at the end of docker run. The container runs from the referenced image, executes the script, and (because of --rm) the stopped container is deleted afterwards.
docker run --rm --name "simple-temp" "simple" "value1" "value1"
I want the equivalent of this command: create and run an instance from the registry, run the entrypoint once, then shut down and delete the container. How do I accomplish this in Azure Container Instances? If not Container Instances, which service should I use?
You can use the Docker CLI even with Azure ACI:
https://learn.microsoft.com/en-us/azure/container-instances/quickstart-docker-cli
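As a sketch of what a one-shot run can look like with the Azure CLI instead (the resource group, registry, and image names below are placeholders): --command-line overrides the image's entrypoint and arguments for this instance, and --restart-policy Never makes it a run-once container.

```shell
az container create \
  --resource-group myResourceGroup \
  --name simple-temp \
  --image myregistry.azurecr.io/simple:latest \
  --restart-policy Never \
  --command-line "./script.sh value1"

# Fetch the echoed output, then clean up the instance:
az container logs --resource-group myResourceGroup --name simple-temp
az container delete --resource-group myResourceGroup --name simple-temp --yes
```

Unlike docker run --rm, ACI does not delete the instance for you, hence the explicit delete at the end.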
Related
I'm currently developing an Azure web app from a container. I want to override the initial docker run that Azure performs, because my container needs some environment variables.
So, I tried different approaches, but nothing worked. For example, if I set my variables in the 'Startup File' field of the container settings, Azure appends the content to the original docker run, as explained here: StartupFile in webapp for container
Something like this
docker run -d -p 5920:80 --name test -e WEBSITES_ENABLE_APP_SERVICE_STORAGE=false -e WEBSITE_SITE_NAME=test -e WEBSITE_AUTH_ENABLED=False -e PORT=80 -e WEBSITE_ROLE_INSTANCE_ID=0 -e WEBSITE_HOSTNAME=test.net -e WEBSITE_INSTANCE_ID=0 test/myImage:latest -e DB_HOST=test.com:3306 -e DB_DATABASE=test -e DB_USERNAME=test -e DB_PASSWORD=test -e APP_URL=https://test.com
Obviously, this won't work.
I tried to get into the app using FTPS, but I can't find the .env file, and I cannot connect to the container via SSH because it keeps failing.
So, my question is: how can I override the initial docker run command that Azure is executing?
I added all my environment variables in the app settings and I can see them in Kudu, but I'm missing a step.
Thanks for your help
I resolved my problem by using Docker Compose in the container settings.
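For anyone hitting the same issue, a minimal sketch of such a compose file (the image name and values are placeholders mirroring the question): environment variables declared here reach the container at startup, instead of being appended in the wrong place on the docker run line.

```yaml
version: '3'
services:
  app:
    image: test/myImage:latest
    ports:
      - "80:80"
    environment:
      - DB_HOST=test.com:3306
      - DB_DATABASE=test
      - DB_USERNAME=test
      - DB_PASSWORD=test
      - APP_URL=https://test.com
```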
We have a use case where docker remote execution is separately executed on another server.
So users log in to server A and submit an ssh command which runs a script on remote server B.
The script performs several docker commands like prune, build, and run, which are working fine.
I have a command at the end of the script which is supposed to write the docker logs, in the background, to an EFS file system that is mounted on both servers A and B. This way users can access the log file from server A without actually logging in to server B (to prevent access to the containers).
I have tried all the available solutions related to this, and nothing seems to work for running a process in the background remotely.
Any help is greatly appreciated.
The code below is the script on the remote server. The user calls this script from server A over ssh, like: ssh id@serverB-IP docker_script.sh
loc=${args[0]}
cd "$loc"
# Set parameters
imagename=${args[1]}
port=${args[2]}
desired_port=${args[3]}
docker stop "$imagename" && sleep 10 || true
docker build -t "$imagename" "$loc" |& tee build.log
docker system prune -f
port_config=$([[ -z "${port}" ]] && echo '' || echo "-p $desired_port:$port")
docker run -d --rm --name "$imagename" $port_config "$imagename"
sleep 10
docker_gen_log "$loc" "$imagename"
echo ""
echo "Docker build and run are now complete ....Please refer to the logs under $loc/build.log $loc/run.log"
}
docker_gen_log(){
# use the function's own arguments rather than the global args array
loc=$1
cd "$loc"
imagename=$2
docker logs -f "$imagename" &> run.log &
}
If you're only running a single container, as you show, you can just run it in the foreground. The container's logs will go to the docker run stdout/stderr, and you can collect them normally.
docker run --rm --name "$imagename" $port_config "$imagename" \
> "$loc/run.log" 2>&1
# also note no `-d` option
echo ""
echo "Docker build and run are now complete ....Please refer to the logs under $loc/build.log $loc/run.log"
If you have multiple containers, or if you want the container to keep running after the script disconnects, you can collect logs at any point until the container is removed. For this sort of setup you'd want to explicitly docker rm the container.
docker run -d --name "$imagename" $port_config "$imagename"
# note, no --rm option
docker wait "$imagename" # actually container name
docker logs "$imagename" >"$loc/run.log" 2>&1
docker rm "$imagename"
This latter approach won't give you incremental logs while the container is running. Given that your script seems to assume the container will be finished within 10 seconds that's probably not a concern for you.
I am working with 2 Docker containers. Container 1: when run, it asks the user for input, for example "What's your name?", and stores it. Container 2: takes the user input from container 1 and echoes "Hello $username!".
test.sh:
#!/bin/bash
echo "What's your name?"
read -p "Enter your name: " username
echo "Hello $username!"
Dockerfile:
FROM ubuntu
COPY test.sh ./
ENTRYPOINT [ "./test.sh" ]
You need to connect the two containers together, using the docker network connect command. More information is available on the following webpage:
https://docs.docker.com/engine/reference/commandline/network_connect/
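A rough sketch with placeholder names (assuming both containers are already running, and a docker daemon is available): putting them on a user-defined network lets them reach each other by container name.

```shell
docker network create app-net
docker network connect app-net container1
docker network connect app-net container2
# container2 can now resolve container1 by name, e.g.:
# docker exec container2 getent hosts container1
```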
I'm running docker following this procedure:
$ sudo service docker start
$ docker pull epgg/eg
$ docker run -p 8081:80 --name eg -it epgg/eg bash
root@35f54d7d290f:~#
Notice that the last step leaves you at a root prompt, root@35f54d7d290f:~#.
When I do
root@35f54d7d290f:~# exit
exit
The docker process ends and the Apache server inside the container is dead.
How can I exit the container safely, and how can I get back into the container prompt afterwards?
When you run the following command, it performs 2 operations.
$ docker run -p 8081:80 --name eg -it epgg/eg bash
It creates a container named eg.
It has only one purpose/process: bash, which you have supplied as an override of the image's default command.
That means that when the bash shell exits, the container has nothing left to run, and hence your docker container also enters the stopped state.
Ideally you should create a container that runs the apache server as the main process (either via the default entrypoint or cmd).
$ docker run -p 8081:80 --name eg -d epgg/eg
And then, using the following command, you can get a shell inside the running container.
$ docker exec -it eg bash
Here the name of your container is eg.
(Note: since you already have a container named "eg", you may want to remove it first.)
$ docker rm -f eg
Since the purpose of any container is to run one or more processes, the container stops/exits when its main process finishes. So to keep a container running in the background, it must have an active process inside it.
You can use the -d shorthand when running the container to detach it from the terminal, like this:
docker run -d -it --name {container_name} {image}:{tag}
But this alone doesn't guarantee that any process is actively running in the background, so even in this case the container will stop when its main process comes to an end.
To keep the apache server actively running, start it with the -DFOREGROUND flag:
/usr/sbin/httpd -DFOREGROUND (for CentOS/RHEL)
This keeps Apache as the container's main foreground process, so the detached container stays running.
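For example, a minimal Dockerfile sketch (the CentOS base and httpd package are assumptions, not taken from the question's image) that makes Apache the container's main process:

```dockerfile
FROM centos:7
RUN yum install -y httpd
EXPOSE 80
# httpd runs in the foreground as PID 1, so the container stays alive
CMD ["/usr/sbin/httpd", "-DFOREGROUND"]
```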
In other cases, to keep your services running in detached mode, simply pass the /bin/bash command; this keeps a bash shell active in the background.
docker run -d -it --name {container_name} {image}:{tag} /bin/bash
Anyway, to get out of a running container without stopping it or its process, simply press Ctrl+P followed by Ctrl+Q.
To attach container again to the terminal use this:
docker attach {container_name}
I'm using Docker plugin for bamboo and I need to execute a script in the docker container.
The sh script contains:
echo \"ini_source_path\": \"${bamboo.ini_source_path}\",
and if I put this line directly in the Container Command field, ${bamboo.ini_source_path} will be replaced with the value of this variable.
The problem comes when I put /bin/bash script.sh in the Container Command field, because then I get an error:
script.sh: line 35: \"${bamboo.ini_source_path}\",: bad substitution
Is there a way I can reach the bamboo.ini_source_path variable from my script in the docker container?
Thanks!
What version of Bamboo are you using? This problem was fixed in Bamboo 6.1.0:
Unable to use variables in Container name field in Run docker task
Workaround:
Create a Script Task that runs before the Docker Task.
Run commands like
echo "export sourcepath=$ini_source_path" > scriptname.sh
chmod +x scriptname.sh
The Docker Task will map ${bamboo.working.directory} to the Docker /data volume, so the just-created scriptname.sh script is available in the Docker container. The script will be executed and will set the variable correctly.
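Putting the pieces together, a runnable sketch of the flow (the value /opt/ini is a placeholder; in a real plan, Bamboo substitutes ${bamboo.ini_source_path} in the Script Task before this runs). Note the exported name contains no dot, which is why it works inside a script while the dotted form triggers "bad substitution":

```shell
# Script Task side: write a helper that exports the value
echo "export sourcepath=/opt/ini" > scriptname.sh   # /opt/ini is a placeholder
chmod +x scriptname.sh

# Container side: the helper arrives via the mapped /data volume;
# sourcing it makes the variable available to the rest of the script
. ./scriptname.sh
echo "sourcepath is $sourcepath"
```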