We have a Spring Boot app which requires a keystore file located at "/secrets/app.keystore.jks" to run.
We want to run the app in a container on an Azure App Service Linux instance. For security reasons we don't want to include the "/secrets/app.keystore.jks" file in the container itself. Instead, we have managed to upload the file to the "/home/site/wwwroot/secrets/" folder on the App Service.
We use the following command to start up the container on the App Service:
docker run -d -p 80:80 --name myApp \
  -e WEBSITES_ENABLE_APP_SERVICE_STORAGE=TRUE \
  -v /home/site/wwwroot/secrets:/secrets \
  myacr.azurecr.io/myapp:latest
In the App Service's log, we see the following error:
java.lang.IllegalStateException: java.io.IOException: Could not open
/secrets/app.keystore.jks as a file, class path resource, or URL.
It looks to me like the volume was not set up, so the app cannot access the file "/secrets/app.keystore.jks".
Does anyone know how to set up a volume so the app in the container can access a file on the host?
There are two ways to achieve your purpose. One is to set the App Service setting WEBSITES_ENABLE_APP_SERVICE_STORAGE to true and mount the persistent volume into your container like below in the docker-compose file:
volumes:
  - ${WEBAPP_STORAGE_HOME}/site/wwwroot/secrets:/secrets
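For context, a minimal sketch of what the full docker-compose file could look like, reusing the image and port from the question (the service name myapp is a placeholder, and WEBSITES_ENABLE_APP_SERVICE_STORAGE=true is set as an app setting on the App Service itself, not in this file):

version: '3.3'
services:
  myapp:
    image: myacr.azurecr.io/myapp:latest
    ports:
      - "80:80"
    volumes:
      - ${WEBAPP_STORAGE_HOME}/site/wwwroot/secrets:/secrets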
Get more details in the App Service documentation on persistent shared storage for custom containers.
Another way is to mount Azure Storage (an Azure Files share) to your container and upload the files to that share. Follow the steps in Mount Azure Storage as a local share in App Service.
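For example, with the Azure CLI the mount for that second approach could be created like this; the resource names here are placeholders:

az webapp config storage-account add \
  --resource-group myResourceGroup \
  --name myAppService \
  --custom-id secrets-mount \
  --storage-type AzureFiles \
  --account-name mystorageaccount \
  --share-name secrets \
  --access-key $STORAGE_KEY \
  --mount-path /secrets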
Related
I run these docker commands locally to copy a file to a volume, and this works fine:
docker container create --name temp_container -v temp_vol:/target hello-world
docker cp somefile.txt temp_container:/target/.
Now I want to do the same, but with volumes located in Azure. I have an image azureimage that I pushed to Azure, and from the container I need to access a volume containing a file that is on my local disk.
I can create the volume in an Azure context like so:
docker context use azaci
docker volume create test-volume --storage-account mystorageaccount
But when I try to copy a file to the volume pointed by a container:
docker context use azaci
docker container create --name temp_container2 -v test-volume:/target azureimage
docker cp somefile.txt temp_container2:/target/.
I get errors saying that the container and cp commands cannot be executed in the Azure context:
Command "container" not available in current context (azaci), you can
use the "default" context to run this command
Command "cp" not available in current context (azaci), you can use
the "default" context to run this command
How to copy a file from my local disk to volume in Azure context? Do I have to upload it to Azure first? Do I have to copy it to the file share?
As far as I know, when you mount an Azure file share to the ACI, you should upload the files into the file share, and they will then appear in the container instance at the mount point. You can use the Azure CLI command az storage file upload or AzCopy to upload the files.
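For example, a sketch of the upload with the Azure CLI, reusing the storage account and share names from the question (docker volume create in the ACI context backs the volume with an Azure file share of the same name):

az storage file upload \
  --account-name mystorageaccount \
  --share-name test-volume \
  --source ./somefile.txt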
The docker cp command copies files/folders between a container and the local filesystem. But the file share lives in Azure Storage, not locally, and the container runs in ACI, which is why those commands are unavailable in the azaci context.
I am trying to mount a folder of the host machine into a Docker container, but without success. I have the following setup:
1. Windows machine
2. From 1 I access a Linux server
3. On 2 I create a Docker container that should be able to access files on 1
In the Dockerfile I do the following:
ADD //G/foo/boo /my_project/boo
This throws an error that the folder cannot be found, since the build runs on the Linux server and tries to resolve the path there. However, I do want the container to access the Windows machine, ideally without copying the files from the source to the target folder. I am not sure whether ADD copies the files or just gives the container an opportunity to access them.
Volumes are designed to be attached to running containers, not to the build step that produces the image (ADD and COPY do copy files, and they resolve paths in the build context on the machine where the build runs, which is why your Windows path cannot be found on the Linux server). If you want your running container to access a shared file system, attach the volume when the application container is created. This step depends on what you are using to deploy the containers, but in case you are using docker-compose it can be done as shown below:
nginxplus:
  image: bhc-nginxplus
  volumes:
    - "${path_on_the_host}:${path_in_the_container}"
Or with plain docker commands:
docker run -v ${path_on_the_host}:${path_in_the_container} $image
For development: I have a Node API deployed in a Docker container. That container runs in a Linux virtual machine.
For deployment: I push the Docker image to Azure Container Registry (ACR) and then our admin creates the container instance (ACI).
I need to copy a data file named "config.json" into a shared volume "data_storage" in Azure.
I don't understand how to write a command in the Dockerfile that copies the JSON file into Azure: when I build the Dockerfile I am building the image, and the folder is only mapped when the container is created with "docker run -v", not at the image-build stage.
Any help please?
As far as I know, you can put the copy action, and all the actions that depend on the JSON file, into a script and execute that script as the command when you run the image, so that the JSON file is not needed during image creation. When you run the image with the volume mounted, the JSON file already exists in the container, and all the actions, including the copy, will go ahead.
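A minimal sketch of that idea, assuming a Node base image, a mount path of /data_storage, and hypothetical file names (entrypoint.sh, server.js):

# Dockerfile
FROM node:18-alpine
WORKDIR /app
COPY . .
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]

# entrypoint.sh
#!/bin/sh
# Copy the config from the mounted volume into place, then start the app.
cp /data_storage/config.json /app/config.json
exec node server.js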
For ACI, you can store the JSON file in an Azure file share and mount it to the container instance; follow the steps in Mount an Azure file share in Azure Container Instances.
To copy your file to the shared volume "data_storage", you will need to ask your admin to create the volume mount:
https://learn.microsoft.com/en-us/azure/container-instances/container-instances-volume-azure-files
This assumes the data_storage you are referring to is an Azure Files share.
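For reference, a sketch of the az container create call the admin might use; the resource names and the storage key variable are placeholders:

az container create \
  --resource-group myResourceGroup \
  --name myapi \
  --image myacr.azurecr.io/myapi:latest \
  --azure-file-volume-account-name mystorageaccount \
  --azure-file-volume-account-key $STORAGE_KEY \
  --azure-file-volume-share-name data_storage \
  --azure-file-volume-mount-path /data_storage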
I deployed a Python 3 app to a Docker container and it is failing due to the following:
The app reads files from windows network share drive for processing. The app runs fine when I run it from my windows machine, where I have access to the share drive.
In the remote Linux docker container, the app fails because it can't see the shared folder.
I would appreciate any advice or example on how to make the share drive visible to the Docker container. Right now, in the Python code, I point to the share using the os package, for example os.listdir(path), where path is
\\myshare.abc.com\myfolder
You need to mount the shared storage inside the Docker container to access it.
Mount the volume in the docker run command:
docker run --volume-driver=nfs -v server/dir:/path/to/mount/point ...
where server/dir represents the shared path and the rest is the path accessible inside the container.
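Since the share in the question is a Windows (SMB/CIFS) share rather than NFS, a closer sketch uses the local volume driver's CIFS support; the volume name and credentials here are placeholders:

docker volume create \
  --driver local \
  --opt type=cifs \
  --opt device=//myshare.abc.com/myfolder \
  --opt o=username=myuser,password=mypass \
  myshare

docker run -v myshare:/mnt/myfolder myimage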
Read more about volumes: https://docs.docker.com/storage/volumes/
I'm trying to host Jenkins in a Docker container in the Azure App Service. This means it's 'linux' hosting.
By default the jenkins/jenkins:2.110-alpine Docker image stores its data in the /var/jenkins_home folder in the container. I want this data/config persisted to Azure persistent storage so that it survives container restarts.
I've read documentation and blogs stating that you can have container data persisted if it's stored in the /home folder.
So I've customized the Jenkins Dockerfile to look like this...
FROM jenkins/jenkins:2.110-alpine
USER root
RUN mkdir /home/jenkins
RUN ln -s /var/jenkins_home /home/jenkins
USER jenkins
However, when I deploy to Azure App Service I don't see the files in my /home folder (looking in the Kudu console). The app starts just fine, but I lose all of my data when I restart the container.
What am I missing?
That's expected, because you only persist a symlink (ln -s /var/jenkins_home /home/jenkins) on the Azure host; all the files physically exist inside the container under /var/jenkins_home.
To fix this, you have to actually change the Jenkins configuration to store all data in the /home/jenkins directory you already create in your Dockerfile.
A quick search for the Jenkins data folder suggests setting the environment variable JENKINS_HOME to your directory.
In your Dockerfile:
ENV JENKINS_HOME /home/jenkins
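Putting it together, the Dockerfile from the question might become the following; the chown is an assumption so that the jenkins user can write to the directory:

FROM jenkins/jenkins:2.110-alpine
USER root
RUN mkdir -p /home/jenkins && chown jenkins:jenkins /home/jenkins
USER jenkins
ENV JENKINS_HOME /home/jenkins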