I run these Docker commands locally to copy a file to a volume, and this works fine:
docker container create --name temp_container -v temp_vol:/target hello-world
docker cp somefile.txt temp_container:/target/.
Now I want to do the same, but with volumes located in Azure. I have an image azureimage that I pushed to Azure, and from the container I need to access a volume containing a file that is on my local disk.
I can create the volume in an Azure context like so:
docker context use azaci
docker volume create test-volume --storage-account mystorageaccount
But when I try to copy a file to the volume pointed to by a container:
docker context use azaci
docker container create --name temp_container2 -v test-volume:/target azureimage
docker cp somefile.txt temp_container2:/target/.
I get errors saying that the container and cp commands cannot be executed in the Azure context:
Command "container" not available in current context (azaci), you can
use the "default" context to run this command
Command "cp" not available in current context (azaci), you can use
the "default" context to run this command
How do I copy a file from my local disk to a volume in the Azure context? Do I have to upload it to Azure first? Do I have to copy it to the file share?
As far as I know, when you mount an Azure File Share to ACI, you should upload the files into the File Share, and they will then be visible in the container instance where the share is mounted. You can use the Azure CLI command az storage file upload or AzCopy to upload the files.
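For example, a minimal sketch using the account and share names from your question (the account key is a placeholder you would supply yourself):
# Upload somefile.txt into the file share that backs test-volume.
az storage file upload \
    --account-name mystorageaccount \
    --account-key <storage-account-key> \
    --share-name test-volume \
    --source ./somefile.txt \
    --path somefile.txt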
The docker cp command copies files and folders between a container and the local filesystem. But the File Share lives in Azure Storage, not locally, and the container runs in ACI, so docker cp does not apply here.
I'm trying to remotely interact with a container instance in Azure.
I've performed the following steps:
Loaded the local image into the local image store
docker load -i ima.tar
Logged in to the remote ACR
docker login <login-server> --username <username> --password <password>
Tagged the image
docker tag local-image:tag <login-server/repository-name:tag>
Pushed the image
docker push <login-server/repository-name:tag>
If I try to run a command like this:
az container exec --resource-group myResourceGroup --name <name of container group> --container-name <name of container app> --exec-command "/bin/bash"
I can successfully login to bash interactively.
My goal is to process a local file in a remote ACI using the ACR image, something like this:
docker run -t -i --entrypoint=./executables/run.sh -v "%cd%"\..:/opt/test remote_image:tag
Is there a way to do so? How can I run the ACI and push the file to it remotely via the Azure CLI?
Thx
For your purpose, I recommend mounting an Azure File Share to the ACI and then uploading the files to the File Share. You can then access the files in the File Share from inside the ACI. Follow the steps here to mount the File Share.
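A rough sketch of creating the container group with a file share mounted (the resource group, storage account, share name, and key below are placeholders, not values from your setup):
# Create the ACI container group with an Azure file share mounted at /opt/test.
az container create \
    --resource-group myResourceGroup \
    --name mycontainergroup \
    --image <login-server/repository-name:tag> \
    --azure-file-volume-account-name <storage-account-name> \
    --azure-file-volume-account-key <storage-account-key> \
    --azure-file-volume-share-name <file-share-name> \
    --azure-file-volume-mount-path /opt/test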
We have a SpringBoot app which requires a keystore file located at "/secrets/app.keystore.jks" to run.
We want to run the app in a container on an Azure App Service Linux instance. For security reasons we don't want to include the "/secrets/app.keystore.jks" file in the container image itself. Instead, we have managed to upload the file to the "/home/site/wwwroot/secrets/" folder on the App Service.
We use the following command to start up the container on the App Service:
docker run -d myacr.azurecr.io/myAPp:latest -p 80:80 --name myApp \
    -e WEBSITES_ENABLE_APP_SERVICE_STORAGE=TRUE -v /home/site/wwwroot/secrets:/secrets
In the app service's log, we have the error:
java.lang.IllegalStateException: java.io.IOException: Could not open
/secrets/app.keystore.jks as a file, class path resource, or URL.
It looks to me like the volume was not set up, so the app cannot access the file "/secrets/app.keystore.jks".
Does anyone know how to setup a volume so the app in the container can access a file on the host?
There are two ways to achieve your purpose. One is to set the environment variable WEBSITES_ENABLE_APP_SERVICE_STORAGE to true; then you can mount the persistent volume to your container like below in the docker-compose file:
volumes:
- ${WEBAPP_STORAGE_HOME}/site/wwwroot/secrets:/secrets
Get more details here.
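If you prefer the CLI over the portal, a minimal sketch for setting that app setting (the resource group and app name are placeholders):
# Enable persistent /home storage for the web app.
az webapp config appsettings set \
    --resource-group myResourceGroup \
    --name myApp \
    --settings WEBSITES_ENABLE_APP_SERVICE_STORAGE=true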
Another way is that you can mount the Azure Storage to your container and upload the files to the storage. Follow the steps here.
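A rough sketch of that second option with the CLI (all names and the key below are placeholders):
# Mount an Azure Files share into the container at /secrets.
az webapp config storage-account add \
    --resource-group myResourceGroup \
    --name myApp \
    --custom-id secrets-mount \
    --storage-type AzureFiles \
    --account-name <storage-account-name> \
    --share-name <file-share-name> \
    --access-key <storage-account-key> \
    --mount-path /secrets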
For development: I have a node API deployed in a docker container. That docker container runs in a Linux virtual machine.
For deployment: I push the Docker image to Azure (ACR) and then our admin creates the container (ACI).
I need to copy a data file named "config.json" into a shared volume "data_storage" in Azure.
I don't understand how to write a command in the Dockerfile that will copy the JSON file to Azure, because when I build the Dockerfile I am only building the image; the folder should be mapped when creating the container with "docker run -v", not at the image-build stage.
Any help please?
As far as I know, you can put the copy action, and all the actions that depend on the JSON file, into a script and run that script as the command when you start the image, so the JSON file is not needed at image-build time. When you run the image with the volume mounted, the JSON file already exists in the container, and all the actions, including the copy, can go ahead, as sketched below.
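A minimal sketch of such a script (the mount point, target path, and start command are assumptions for illustration):
#!/bin/sh
# entrypoint.sh - runs at container start, after the volume is mounted.
# Copy the config from the mounted volume to where the app expects it, then start the API.
cp /data_storage/config.json /app/config.json
exec node /app/server.js
In the Dockerfile you would then set this script as the ENTRYPOINT (or CMD) instead of copying config.json at build time, and run the container with -v data_storage:/data_storage.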
For ACI, you can store the JSON file in an Azure File Share and mount it to the ACI; follow the steps in Mount an Azure file share in Azure Container Instances.
To copy your file to the shared volume "data_storage", you will need to ask your admin to create the volume mount:
https://learn.microsoft.com/en-us/azure/container-instances/container-instances-volume-azure-files
Assuming the data_storage you are referring to is an Azure file share.
I want to mount my USB drive into a running Docker container to manually back up some files.
I know of the -v feature of docker run, but this creates a new container.
Note: it's a nextcloudpi container.
You can only change a very limited set of container options after a container starts up. Options like environment variables and container mounts can only be set during the initial docker run or docker create. If you want to change these, you need to stop and delete your existing container, and create a new one with the new mount option.
If there's data that you think you need to keep or back up, it should live in some sort of volume mount anyways. Delete and restart your container and use a -v option to mount a volume on where the data is kept. The Docker documentation has an example using named volumes with separate backup and restore containers; or you can directly use a host directory and your normal backup solution there. (Deleting and recreating a container as I suggested in the first paragraph is extremely routine, and this shouldn't involve explicit "backup" and "restore" steps.)
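A minimal sketch of that recreate step for this case (the image name and host path are assumptions):
# Stop and remove the existing container, then recreate it with the
# USB drive's directory mounted where the data should live.
docker stop nextcloudpi
docker rm nextcloudpi
docker run -d --name nextcloudpi -v /mnt/usb-drive:/data <nextcloudpi-image>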
If you have data that's there right now that you can't afford to lose, you can docker cp it out of the container before setting up a more robust storage scheme.
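For example (the container name and path are assumptions):
# Copy existing data out of the container before recreating it.
docker cp nextcloudpi:/data ./nextcloudpi-data-backup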
As David Maze mentioned, it's almost impossible to change the volume location of an existing container by using normal docker commands.
I found an alternative way that works for me. The main idea is to convert the existing container into a new Docker image and create a new container on top of it. Hope it works for you too.
# Create a new image from the container
docker commit CONTAINERID NEWIMAGENAME
# Create a new container on the top of the new image
docker run -v HOSTLOCATION:CONTAINERLOCATION NEWIMAGENAME
I know the question is from May, but for future searchers:
Create a mounting point on the host filesystem:
sudo mkdir /mnt/usb-drive
Run the docker container using the --mount option and set the "bind propagation" to "shared":
docker run --name mynextcloudpi -it --mount type=bind,source=/mnt/usb-drive,target=/mnt/disk,bind-propagation=shared nextcloudpi
Now you can mount your USB drive to the /mnt/usb-drive directory and it will be mounted to the /mnt/disk location inside the running container.
E.g.: sudo mount /dev/sda1 /mnt/usb-drive
Change /dev/sda1 to match your device, of course.
More info about bind-propagation: https://docs.docker.com/storage/bind-mounts/#configure-bind-propagation
I have deployed a genezys/gitlab docker image on a host:
docker run --name gitlab_data genezys/gitlab:7.5.2 /bin/true
docker run --detach --name gitlab --publish 8080:80 --publish 2222:22 --volumes-from gitlab_data genezys/gitlab:7.5.2
Now I want to back up the code repository in case the host crashes.
I am a little confused about the backup policy: since I created the gitlab_data container for storage purposes, should I back up the whole gitlab_data image? Or should I just use gitlab rake to back up the code repository? Or are there better methods?
Only the official backup process should be needed.
Backing up the image should not be needed: you would simply docker run the same image again, with the right parameter to restore the app:
docker run --name=gitlab -it --rm [OPTIONS] \
sameersbn/gitlab:7.10.1 app:rake gitlab:backup:restore
Backing up the image doesn't really make sense: the image should contain only the app, which can be saved and exported with docker save. Any persistent data should be backed up independently.
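For example, saving just the image (and only the image) looks like this; note that data in volumes is not included:
# Export the image itself to a tar archive; volume data is NOT included.
docker save -o gitlab-image.tar sameersbn/gitlab:7.10.1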
Plus:
An application backup (like the app:rake task) isn't the same as "saving an image" (an image is just a filesystem).
When you do an application backup (app:rake here), you can do additional jobs in order to ensure the consistency and integrity of the data you are about to back up. You are not simply compressing folders.
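For instance, with the sameersbn/gitlab image the application backup is a rake task run in its own container, mirroring the restore command above (options elided the same way):
# Create an application-level backup (repositories, database, uploads).
docker run --name=gitlab -it --rm [OPTIONS] \
    sameersbn/gitlab:7.10.1 app:rake gitlab:backup:create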
Thomasleveil adds in the comments:
You cannot back up your git repo by making a backup of the docker container into a docker image... because the gitlab image defines volumes for /home/git/data and /var/log/gitlab.
So any data written to those paths in the docker container IS NOT written to the docker container's filesystem. As a result, the docker export or docker commit commands will not include the content of those paths.
In case of a data container, the OP adds:
I used docker commit to save the gitlab_data container as a new image, then restarted the gitlab container using the new image as a volume, but found that all the previous data was gone (including the code repository).
You don't restart gitlab with "the new (data) image": you need to create a container from the gitlab_data_image you committed, and then restart gitlab with --volumes-from pointing at the new_gitlab_data container created from that image.
docker create --name="new_gitlab_data" gitlab_data_image
docker run --detach --name gitlab --publish 8080:80 --publish 2222:22 --volumes-from new_gitlab_data genezys/gitlab:7.5.2
Additional information:
Data stored on a volume from a "data container" is not actually "in" the container. It is actually in a non-obvious directory on the host. So docker commit of the data container does not include the data stored on the volume.
To back up data from a Docker data container, you should mount a volume from the host and use --volumes-from your_data_container to access the data container data. Then copy from the data container to the mounted host volume. That process is described with more detail in the Docker docs, but here is a shorthand version:
docker run --volumes-from dbdata -v $(pwd):/backup ubuntu tar cvf /backup/backup.tar /dbdatadir
Where "dbdata" is your data container and "dbdatadir" is the location of the data you want to back up in the container.