I'm trying to remotely interact with a container instance in Azure.
I've performed the following steps:
Loaded the local image into the local registry
docker load -i ima.tar
Logged in to the remote ACR
docker login <login-server> --username <username> --password <password>
Tagged the image
docker tag local-image:tag <login-server/repository-name:tag>
Pushed the image
docker push <login-server/repository-name:tag>
If I try to run a command like this:
az container exec --resource-group myResourceGroup --name <name of container group> --container-name <name of container app> --exec-command "/bin/bash"
I can successfully log in to bash interactively.
My goal is to process a local file in a remote ACI using the ACR image, something like this:
docker run -t -i --entrypoint=./executables/run.sh -v "%cd%"\..:/opt/test remote_image:tag
Is there a way to do so? How can I run the ACI and remotely push a file to it via the AZ CLI?
Thx
For your purpose, I recommend you mount the Azure File Share to the ACI and then upload the files to the File Share. Finally, you can access the files in the File Share inside the ACI. Follow the steps here to mount the File Share.
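A minimal sketch of that approach, assuming a storage account named mystorage and a file share named testshare (both names are placeholders, not taken from your setup), and assuming run.sh reads its input from the mount path, could look like this:

az storage file upload --account-name mystorage --share-name testshare --source ./input.txt

az container create \
    --resource-group myResourceGroup \
    --name mycontainergroup \
    --image <login-server/repository-name:tag> \
    --registry-login-server <login-server> \
    --registry-username <username> \
    --registry-password <password> \
    --azure-file-volume-account-name mystorage \
    --azure-file-volume-account-key <storage-account-key> \
    --azure-file-volume-share-name testshare \
    --azure-file-volume-mount-path /opt/test \
    --command-line "./executables/run.sh"

The mounted share then plays the role of the -v "%cd%"\..:/opt/test bind mount from your local docker run command.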
Related
In Azure, we try to create a container using Azure Container Instances with a prepared YAML file.
From the machine where we execute the az container create command, we can log in successfully to our private registry (e.g. private.dev on JFrog Artifactory) after entering the password:
docker login private.dev -u svc-faselect
Login succeeded
We have a YAML file for the deployment and are trying to create the container using the az command from the SAME server:
az container create --resource-group FRONT-SELECT-NA2 --registry-login-server="private.dev"
--registry-username=svc-faselect --registry-password="..." --file ads-azure.yaml
An error response is received from the docker registry 'private.dev'. Please retry later.
I have only one image in my YAML file.
I am having a really hard time debugging why this error is returned, since the error response does not provide any useful information.
I searched among similar network issues but without success:
https://learn.microsoft.com/en-us/azure/container-registry/container-registry-troubleshoot-access
I see a few things that could be the reason for your problem.
There should be no = in the az container create options
--registry-login-server, --registry-password and --registry-username
https://learn.microsoft.com/en-us/cli/azure/container?view=azure-cli-latest#az_container_create-examples
The command should look like:
az container create --resource-group FRONT-SELECT-NA2 --registry-login-server jfrogtraining-docker-dev.jfrog.io --registry-username svc-faselect --registry-password "..." --file ads-azure.yaml
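As an extra sanity check, you can verify from the same server that the exact image reference used in ads-azure.yaml can be pulled with those credentials (the image name below is only an example, not taken from your YAML):

docker login private.dev -u svc-faselect
docker pull private.dev/some-repo/ads-frontend:latest

If that pull fails, the problem is on the registry or image-reference side rather than in the az container create call itself.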
I run these docker commands locally to copy a file to a volume, and this works fine:
docker container create --name temp_container -v temp_vol:/target hello-world
docker cp somefile.txt temp_container:/target/.
Now I want to do the same, but with volumes located in Azure. I have an image azureimage that I pushed to Azure, and from the container I need to access a volume containing a file that I have on my local disk.
I can create the volume in an Azure context like so:
docker context use azaci
docker volume create test-volume --storage-account mystorageaccount
But when I try to copy a file to the volume mounted by a container:
docker context use azaci
docker container create --name temp_container2 -v test-volume:/target azureimage
docker cp somefile.txt temp_container2:/target/.
I get errors saying that the container and cp commands cannot be executed in the Azure context:
Command "container" not available in current context (azaci), you can
use the "default" context to run this command
Command "cp" not available in current context (azaci), you can use
the "default" context to run this command
How do I copy a file from my local disk to a volume in the Azure context? Do I have to upload it to Azure first? Do I have to copy it to the file share?
As far as I know, when you mount an Azure File Share to the ACI, you should upload the files into the File Share, and the files will then exist in the container instance where you mounted it. You can use the Azure CLI command az storage file upload or AzCopy to upload the files.
The command docker cp copies files/folders between a container and the local filesystem. But the File Share is in Azure Storage, not local, and the container is also in Azure ACI.
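For illustration, assuming the volume test-volume is backed by a file share of the same name in the storage account mystorageaccount (that mapping is my assumption), the upload could look like this:

az storage file upload --account-name mystorageaccount --share-name test-volume --source ./somefile.txt

Or with AzCopy, using a SAS token for the share:

azcopy copy ./somefile.txt "https://mystorageaccount.file.core.windows.net/test-volume/somefile.txt?<SAS-token>"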
My goal is to run a Python script, which is passed to a Docker container containing source code and dependencies. This is to be run using Azure Container Instances (ACI). On my home machine I can do this in Docker (setting ENTRYPOINT or CMD to "python") with
docker run myimage "myscript.py"
provided I spin up the container from a directory containing myscript.py.
After some reading I thought a similar thing could be achieved on Azure using az container create --command-line, as indicated here. My container creation command would be something like
az container create \
--resource-group myResourceGroup \
--name my-container \
--image myimage:latest \
--restart-policy Never \
--command-line "python 'myscript.py'"
However, the container is unable to find myscript.py. I am using the Azure Cloud Shell. I have tried spinning up the container from a directory containing myscript.py, and I have also tried adding a file share with myscript.py inside. In both cases I get the error
python3: can't open file 'myscript.py': [Errno 2] No such file or
directory
I think I simply do not understand containers and how they interact with the host directory structure on Azure. Can anyone provide some suggestions or pointers to resources?
provided I spin up the container from a directory containing
myscript.py.
Where you issue the command to start a container instance from does not matter at all. The files need to be present inside your image, or you can mount storage into the container and read the files from there.
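A rough sketch of the second option (the storage account name, share name, and key below are placeholders): upload myscript.py to a file share, mount the share into the container group, and point --command-line at the mounted path:

az storage file upload --account-name mystorage --share-name scripts --source ./myscript.py

az container create \
    --resource-group myResourceGroup \
    --name my-container \
    --image myimage:latest \
    --restart-policy Never \
    --azure-file-volume-account-name mystorage \
    --azure-file-volume-account-key <storage-account-key> \
    --azure-file-volume-share-name scripts \
    --azure-file-volume-mount-path /mnt/scripts \
    --command-line "python /mnt/scripts/myscript.py"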
Is it possible to execute a command in a container that is running under the Azure Web App service via Docker Compose?
When I create a single container with az container create ..., it works.
But when I create a set of containers from a Docker Compose script using az webapp create --multicontainer-config-type compose ..., it does not work.
From the logs I see that there is a running container myWebApp_myContainer_1, so I try:
az container exec -g myResourceGroup -n myWebApp_myContainer_1 --exec-command "/bin/bash"
With this result:
The Resource 'Microsoft.ContainerInstance/containerGroups/myWebApp_myContainer_1'
under resource group 'myResourceGroup' was not found.
Then I try:
az container exec -g myResourceGroup -n myWebApp --container-name myWebApp_myContainer_1 --exec-command "/bin/bash"
With this result:
The Resource 'Microsoft.ContainerInstance/containerGroups/myWebApp' under resource group 'myResourceGroup' was not found.
Note that it is normally possible to execute commands in containers started by a Docker Compose script on local Docker (outside of Azure).
Update: I don't want to install an SSH server into the Docker images; it is a bad approach. I'm looking for a way to exec directly, like az container exec does.
Thank you for any hint.
For your issue: when the web app is created from a Docker image, it's just a web app, not a container instance, so you cannot use the command az container exec to connect to it.
If you really want to connect to a web app that was created from a Docker image, there are two ways I know of to achieve it.
The first is the one I described in the comment: install the OpenSSH server in the Docker image, then SSH into the web app through the port exposed to the Internet.
The other one, as you wish, is to use the command az webapp remote-connection. For more details, you can read Open SSH session from remote shell.
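In current Azure CLI versions this command is az webapp create-remote-connection; a rough sketch of using it (the local port is arbitrary) looks like this:

az webapp create-remote-connection --resource-group myResourceGroup --name myWebApp --port 9000

Then, from another shell, connect through the tunnel:

ssh root@127.0.0.1 -p 9000

Note that this still requires SSH to be reachable inside the container, as described in the linked article.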
I have created a container named "tensorflow-syntaxnet_container" with the command
docker run -d --name tensorflow-syntaxnet_container syntaxnet_image
and another container "python_flask", linked to tensorflow-syntaxnet_container, with the command
docker run -d -p 0.0.0.0:5001:5001 --link tensorflow-syntaxnet_container:syntaxnet --name python_flask python_image:latest
Both containers are created successfully and work as expected individually.
I also verified that the python_flask container is linked to the syntaxnet container: cat /etc/hosts shows 172.17.0.27 syntaxnet e002ab9f43a7 tensorflow-syntaxnet_container, which is the container IP of the syntaxnet container.
I need to call a script file demo.sh and access its output file from tensorflow-syntaxnet_container in my Python Flask application, which is located in the python_flask container.
I could not find any mounted directory or folder of the linked container tensorflow-syntaxnet_container.
Could anyone help me out with how to call the script file and access files from one container in another?
Check the Docker Link docs here
and the Docker Volume docs here
With --link you can only link the network interfaces of containers. If you want to share a filesystem, you will probably need --volumes-from.
docker create -v /path/to/script/dir --name scriptstore image1:name
docker run -d --volumes-from scriptstore --name runner image2:name bash /path/to/script/dir/scriptname.sh
Linking two containers enables network communication. What you want are volumes: https://docs.docker.com/engine/tutorials/dockervolumes/#/creating-and-mounting-a-data-volume-container
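For illustration, a sketch of the same idea with a named volume (the volume name and the /data path are placeholders) would be:

docker volume create shared-data
docker run -d --name tensorflow-syntaxnet_container -v shared-data:/data syntaxnet_image
docker run -d -p 5001:5001 --link tensorflow-syntaxnet_container:syntaxnet --name python_flask -v shared-data:/data python_image:latest

demo.sh can then write its output files to /data in the syntaxnet container, and the Flask app can read them from /data in its own container.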