Deploying docker container in Azure with shared volume

For development: I have a Node API deployed in a docker container. That docker container runs in a Linux virtual machine.
For deployment: I push the docker image to Azure Container Registry (ACR) and then our admin creates the container instance (ACI).
I need to copy a data file named "config.json" into a shared volume "data_storage" in Azure.
I don't understand how to write the command in the Dockerfile that will copy the JSON file into Azure: when I build the Dockerfile I am building the image, but that folder is only mapped when the container is created with "docker run -v", not at the stage of building the image.
Any help please?

As far as I know, you can put the copy action, and all the actions that depend on the JSON file, into a script and execute that script as the command when you run the image, so that you do not need the JSON file during image creation. When the image runs and the volume is mounted, the JSON file already exists in the container, and all the actions, including the copy, will go ahead.
For ACI, you can store the JSON file in an Azure File Share and mount it into the container instance; follow the steps in Mount an Azure file share in Azure Container Instances.
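A minimal sketch of that seed-at-startup pattern, using local temp directories to stand in for the image filesystem and the mounted volume (the real paths, such as /app and /data, would depend on your image and your -v flag):

```shell
# Demo of "seed the volume when the container starts" instead of
# copying into the volume at image build time. $APP stands in for the
# image filesystem, $VOL for the mounted "data_storage" volume.
APP=$(mktemp -d)
VOL=$(mktemp -d)

# This file would be baked into the image with a plain COPY instruction.
echo '{"env":"prod"}' > "$APP/config.json"

# This is the logic the entrypoint script would run on every start:
# copy the config into the volume only if it is not there yet, then
# launch the API (the launch step is omitted here).
if [ ! -f "$VOL/config.json" ]; then
  cp "$APP/config.json" "$VOL/config.json"
fi

cat "$VOL/config.json"
```

Because the copy is guarded by the existence check, restarting the container does not overwrite a config.json that was already placed in the share.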

To copy your file to the shared volume "data_storage", you will need to ask your admin to create the volume mount:
https://learn.microsoft.com/en-us/azure/container-instances/container-instances-volume-azure-files
Assuming the data_storage you are referring to is an Azure file share.
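For reference, the admin-side command could look roughly like this; the resource group, registry, storage account name, and key are all placeholders to substitute:

```shell
# Sketch: create the ACI container with the "data_storage" Azure file
# share mounted at /data (names and key are placeholders).
az container create \
  --resource-group myResourceGroup \
  --name my-node-api \
  --image myregistry.azurecr.io/my-node-api:latest \
  --azure-file-volume-account-name mystorageaccount \
  --azure-file-volume-account-key "$STORAGE_KEY" \
  --azure-file-volume-share-name data_storage \
  --azure-file-volume-mount-path /data
```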

Related

Copy file to Docker volume in Azure context

I run these docker commands locally to copy a file to a volume, this works fine:
docker container create --name temp_container -v temp_vol:/target hello-world
docker cp somefile.txt temp_container:/target/.
Now I want to do the same, but with volumes located in Azure. I have an image azureimage that I pushed to Azure, and from the container I need to access a volume containing a file from my local disk.
I can create the volume in an Azure context like so:
docker context use azaci
docker volume create test-volume --storage-account mystorageaccount
But when I try to copy a file to the volume pointed by a container:
docker context use azaci
docker container create --name temp_container2 -v test-volume:/target azureimage
docker cp somefile.txt temp_container2:/target/.
I get errors saying that the container and cp commands cannot be executed in the Azure context:
Command "container" not available in current context (azaci), you can use the "default" context to run this command
Command "cp" not available in current context (azaci), you can use the "default" context to run this command
How to copy a file from my local disk to volume in Azure context? Do I have to upload it to Azure first? Do I have to copy it to the file share?
As far as I know, when you mount an Azure File Share to ACI, you should upload the files into the File Share, and the files will then exist in the container instance where the share is mounted. You can use the Azure CLI command az storage file upload or AzCopy to upload the files.
The docker cp command copies files/folders between a container and the local filesystem. But the File Share lives in Azure Storage, not on your local machine, and the container runs in ACI, so neither end of the copy is local.
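For example, to get somefile.txt into the file share that backs test-volume, either of these works (the storage account name is taken from the question; the SAS token is a placeholder):

```shell
# Option 1: Azure CLI upload into the file share behind test-volume
az storage file upload \
  --account-name mystorageaccount \
  --share-name test-volume \
  --source somefile.txt \
  --path somefile.txt

# Option 2: AzCopy with a SAS token on the file share URL
azcopy copy somefile.txt \
  "https://mystorageaccount.file.core.windows.net/test-volume/somefile.txt?<SAS>"
```

Once the file is in the share, any container that mounts test-volume sees it at its mount path.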

Create volume for container running on Azure App Service Linux

We have a SpringBoot app which requires a keystore file located at "/secrets/app.keystore.jks" to run.
We want to run the app in a container on an Azure App Service Linux instance. For security reasons we don't want to include the "/secrets/app.keystore.jks" file in the container itself. Instead, we have managed to upload the file to the "/home/site/wwwroot/secrets/" folder on the app service.
And we use the following command to start up the container on the app service
docker run -d -p 80:80 --name myApp \
  -e WEBSITES_ENABLE_APP_SERVICE_STORAGE=TRUE \
  -v /home/site/wwwroot/secrets:/secrets \
  myacr.azurecr.io/myAPp:latest
In the app service's log, we have the error:
java.lang.IllegalStateException: java.io.IOException: Could not open /secrets/app.keystore.jks as a file, class path resource, or URL.
It looks to me like the volume was not set up, and the app cannot access the file "/secrets/app.keystore.jks".
Does anyone know how to setup a volume so the app in the container can access a file on the host?
There are two ways to achieve your purpose. One is to set the environment variable WEBSITES_ENABLE_APP_SERVICE_STORAGE to true; you can then mount the persistent volume into your container like below in the docker-compose file:
volumes:
- ${WEBAPP_STORAGE_HOME}/site/wwwroot/secrets:/secrets
Get more details here.
Another way is to mount Azure Storage to your container and upload the files to that storage. Follow the steps here.
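The second approach can also be scripted with the Azure CLI instead of the portal; this is only a sketch, and every name, share, and key below is a placeholder:

```shell
# Sketch: mount an Azure Files share at /secrets in the App Service
# container via a path mapping (names and key are placeholders).
az webapp config storage-account add \
  --resource-group myResourceGroup \
  --name myApp \
  --custom-id secrets-mount \
  --storage-type AzureFiles \
  --account-name mystorageaccount \
  --share-name secrets \
  --access-key "$STORAGE_KEY" \
  --mount-path /secrets
```

After uploading app.keystore.jks into the share, the app should find it at /secrets/app.keystore.jks without the file ever being baked into the image.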

Dockerfile, mount host windows folder over server

I am trying to mount a folder of the host machine to a docker container, but without success. I have the following setup:
1. Windows machine
2. From 1 I access a Linux server
3. On 2 I create a docker container that should be able to access files on 1
In the dockerfile I do the following:
ADD //G/foo/boo /my_project/boo
This throws an error that the folder cannot be found, because the build looks for the folder on the Linux server. However, I do want the container to access the Windows machine, ideally without copying the files from the source to the target folder. I am not sure whether ADD copies the files or just provides access to them.
Volumes are designed to be attached to running containers, not to the containers used to build the docker image (and ADD does copy files, but only from the build context on the machine where the build runs). If you would like your running container to access a shared file system, you need to attach the volume when the application container is created. How you do this depends on what you use to deploy the containers, but if you are using docker-compose it can be done as shown below:
nginxplus:
image: bhc-nginxplus
volumes:
- "${path_on_the_host}:${path_in_the_container}"
or with plain docker commands:
docker run -v ${path_on_the_host}:${path_in_the_container} $image

Azure App Service Linux Container: File Share Not Writeable

As per the title, I have created an Azure App Service running a tomcat image (docker container).
When I set up the path mapping to a File Share, the container (or tomcat) keeps complaining that the folder I mounted into the container is not writeable...
I read on Azure's website that mounted File Shares are Read/Write => https://learn.microsoft.com/en-us/azure/app-service/containers/how-to-serve-content-from-azure-storage
So I'm confused as to why it's still not working... Any help would be really appreciated.
Not sure how you mounted the storage in your Dockerfile (drop your snippet if possible), but I simply made the WORKDIR match the mount path. My Dockerfile was for a node app, which shouldn't make a difference, and I realized that I didn't need the VOLUME keyword (it's commented out).

.Net Core Docker is deleting images(pictures) while building it on DigitalOcean

I have deployed a .NET Core 3.1 project on DigitalOcean using docker. In my project, inside the wwwroot directory, there is an images directory where I upload my pictures. After uploading, I can see the pictures in the browser.
But the problem is that if I build the docker project again and run it, it no longer shows the pictures that were previously uploaded.
My docker build command is: docker build -t "jugaadhai-service" --file Dockerfile .
and docker run command is docker run -it -d -p 0.0.0.0:2900:80 jugaadhai-service
EDIT 1: After some searching I came to know that when the project runs through docker, the files are uploaded into the container's own directory, not into the project directory. That's why the images are missing after a new build.
So when a docker container is created, it's an isolated virtual environment running on a host machine. Whether the host machine is your local computer or some host in the cloud does not really matter; it works the same way. The container is created from the build definition in the Dockerfile.
This means you can replicate the problem locally: build the image, upload a few pictures, then delete the container or create a new one from the same image. The pictures are gone then as well.
If you upload pictures to a container on, let's say, DigitalOcean, and you redeploy a new container with a different tag, the pictures still live inside the old container. The same thing happens on, say, Kubernetes: if a pod/container restart happens, everything is lost forever, as if a new container had been built.
This is where volumes come into play. When you have persistent data you want to keep, you should store it outside of the container itself. If you want to store the images on the host machine or some other network drive, you have to specify that and map it into the container.
You can find out more about it here:
https://docs.docker.com/storage/volumes/
https://docs.docker.com/storage/
https://www.youtube.com/watch?v=Nxi0CR2lCUc
https://medium.com/bb-tutorials-and-thoughts/understanding-docker-volumes-with-an-example-d898cb5e40d7
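Applied to the commands from the question, a named volume mapped over the images directory keeps the uploads across rebuilds. This is a sketch: the volume name uploaded-images is an arbitrary choice, and the in-container path /app/wwwroot/images is an assumption that must match where your app actually writes:

```shell
# Build as before
docker build -t "jugaadhai-service" --file Dockerfile .

# Create a named volume and map it over the images directory, so
# uploads land in the volume instead of the container's writable
# layer (in-container path is an assumption -- adjust to your app).
docker volume create uploaded-images
docker run -it -d -p 0.0.0.0:2900:80 \
  -v uploaded-images:/app/wwwroot/images \
  jugaadhai-service
```

Rebuilding the image and starting a fresh container with the same -v flag will then show the previously uploaded pictures again.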
