As per the title, I have created an Azure App Service running a Tomcat image (Docker container).
When I set up the path mapping to a File Share, the container (or Tomcat) keeps complaining that the folder I mounted into the container is not writable.
I read on Azure's website that mounted File Shares are read/write => https://learn.microsoft.com/en-us/azure/app-service/containers/how-to-serve-content-from-azure-storage
So I'm confused as to why it's still not working. Any help with this issue would be really appreciated.
Not sure how you mounted the storage in your Dockerfile; drop your snippet if possible. I simply made the WORKDIR match the mount path. Mine was a Node app, but that shouldn't make a difference, and I realized that I didn't need the VOLUME keyword (it's commented out in the sketch below).
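A minimal sketch of that Dockerfile, assuming a Node app and a mount path of /home/site/data (the path, image tag, and entry file here are placeholders, not the original setup):

    FROM node:18-alpine
    # app code lives in /app
    COPY . /app
    RUN cd /app && npm install
    # match WORKDIR to the storage mount path configured in App Service
    WORKDIR /home/site/data
    # the VOLUME keyword turned out to be unnecessary, so it stays commented
    # VOLUME ["/home/site/data"]
    CMD ["node", "/app/server.js"]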
For development: I have a Node API deployed in a Docker container. That Docker container runs in a Linux virtual machine.
For deployment: I push the Docker image to Azure Container Registry (ACR) and then our admin creates the container (ACI).
I need to copy a data file named "config.json" into a shared volume "data_storage" in Azure.
I don't understand how to write the command in the Dockerfile that will copy the JSON file into that volume, because building the Dockerfile builds the image, while the folder is only mapped when the container is created with "docker run -v", not at image build time.
Any help please?
As far as I know, you can put the copy action, and all the actions that depend on the JSON file, into a script and execute that script as the command when you run the image, so you do not need the JSON file at image creation time. When you run the image with the volume mounted, the JSON file already exists in the container, and all the actions, including the copy, will go ahead.
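A minimal sketch of that idea, with assumed paths (the mount point /data_storage and the file locations are placeholders):

    #!/bin/sh
    # start.sh: runs at container start, after the volume is mounted,
    # so /data_storage already points at the shared storage
    cp /app/config.json /data_storage/config.json
    exec node /app/server.js

and in the Dockerfile:

    COPY config.json /app/config.json
    COPY start.sh /start.sh
    RUN chmod +x /start.sh
    CMD ["/start.sh"]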
For the ACI, you can store the JSON file in an Azure File Share and mount it to the ACI, following the steps in Mount an Azure file share in Azure Container Instances.
To copy your file to the shared volume "data_storage", you will need to ask your admin to create the volume mount:
https://learn.microsoft.com/en-us/azure/container-instances/container-instances-volume-azure-files
This assumes the data_storage you are referring to is an Azure file share.
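For reference, a hedged sketch of what the admin-side Azure CLI command might look like (the resource group, image, storage account, and paths below are all placeholders):

    az container create \
      --resource-group myResourceGroup \
      --name mycontainer \
      --image myregistry.azurecr.io/myapp:latest \
      --azure-file-volume-account-name mystorageaccount \
      --azure-file-volume-account-key $STORAGE_KEY \
      --azure-file-volume-share-name data_storage \
      --azure-file-volume-mount-path /data_storage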
I have a container on Azure. When the container starts, it runs a script that modifies some configuration files under /var/lib/myservice/conf/. I also want to mount an Azure Files volume in this container, with the volume mount path /var/lib/myservice/. The problem is that the container then cannot start successfully. If I change the mount path to /var/lib/myservice/logs/, it starts fine. I think the problem is that after mounting, my script cannot find the configuration files, so it cannot modify them; the /logs folder is left intact, so the container starts successfully.
I'm sorry if my question is a bit confusing. Can anyone help me mount the directory /var/lib/myservice/ successfully? Thank you very much.
The problem is that if you mount the Azure Files volume at the path /var/lib/myservice/, the volume shadows that path, and the container sees only the (initially empty) contents of the Azure file share. The files at that path are necessary for your service to initialize, so the container cannot run successfully.
The logs are not needed for your service to initialize, so nothing breaks when you mount at the log path.
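One common workaround, sketched here with assumed paths and a hypothetical service binary: mount the volume at a neutral path and seed it at startup from the defaults baked into the image.

    #!/bin/sh
    # entrypoint.sh: the Azure Files volume is mounted at /mnt/data (assumed),
    # leaving /var/lib/myservice/conf/ in the image intact
    if [ -z "$(ls -A /mnt/data)" ]; then
        # first run: copy the image's default config onto the share
        cp -r /var/lib/myservice/conf/. /mnt/data/
    fi
    # point the service at the seeded directory (the flag is hypothetical)
    exec /usr/bin/myservice --conf-dir /mnt/data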
I deployed a Python 3 app to a Docker container and it is failing for the following reason:
The app reads files from a Windows network share drive for processing. The app runs fine when I run it from my Windows machine, where I have access to the share drive.
In the remote Linux Docker container, the app fails because it can't see the shared folder.
I would appreciate any advice or example on how to make the share drive visible to the Docker container. Right now, in the Python code, I point to the share using the os package, e.g. os.listdir(path), where path is
\\myshare.abc.com\myfolder
You need to mount the shared storage inside the Docker container to access it.
Mount the volume in your docker run command:

    docker run --volume-driver=nfs -v server/dir:/path/to/mount/point ...

where server/dir is the shared path and the rest is the path accessible inside the container.
Read more about volumes: https://docs.docker.com/storage/volumes/
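Since the share in the question is a Windows (SMB/CIFS) share rather than NFS, a CIFS mount may be closer to what's needed. A sketch using Docker's local volume driver with CIFS options (the server, share, credentials, and image name are placeholders, and this assumes the Docker host kernel can mount CIFS):

    docker volume create \
      --driver local \
      --opt type=cifs \
      --opt device=//myshare.abc.com/myfolder \
      --opt o=username=myuser,password=mypass,vers=3.0 \
      myfolder_volume

    docker run -v myfolder_volume:/mnt/myfolder myimage

The Python code would then read from the mount point, e.g. os.listdir("/mnt/myfolder"), instead of the UNC path.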
I have a running Docker container with some service running inside it. Using that service, I want to pull a file from the host into the container.
docker cp won't work because that command is run from the host; I want to trigger the copy from inside the container.
Mounting host filesystem paths into the container is not possible without stopping the container, and I cannot stop the container. I can, however, install other things inside this Ubuntu container.
I am not sure scp is an option, since I don't have the login/password/keys for the host from inside the running container.
Is it even possible to pull/copy a file into a container from a service running inside the container? What are my options here? FTP? Telnet?
Thanks
I don't think you have many options. One idea: if the host has a web server (or FTP server) up and running, and the file is located in a directory that server can serve, maybe you can use wget or curl from inside the container to fetch the file, as sketched below. Keep in mind that you might need credentials, though...
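A minimal sketch of that idea, assuming the host is reachable from the container at 172.17.0.1 (the default Linux bridge gateway; your address may differ) and that briefly exposing the file over HTTP is acceptable:

    # on the host: temporarily serve the directory containing the file
    cd /path/to/dir && python3 -m http.server 8000

    # inside the container: fetch the file
    curl -o /tmp/myfile http://172.17.0.1:8000/myfile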
IMHO, if what you are asking for is doable, it is a security hole.
Pass the host path as a parameter to your Docker container, customize the Docker image to read the file from that path (the parameter passed in above), and use the file as required.
You could validate this in the Docker entrypoint script.
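A sketch of that validation, assuming the path arrives as an environment variable (the variable name and paths are hypothetical, and the file still has to be visible inside the container, e.g. via a volume mounted when the container is created):

    #!/bin/sh
    # entrypoint.sh: fail fast if the expected file is missing
    if [ ! -f "$CONFIG_PATH" ]; then
        echo "Expected file not found at $CONFIG_PATH" >&2
        exit 1
    fi
    exec "$@"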
I have a Docker container hosting a Jupyter notebook server on my PC, with a directory mounted from the local host. Let's call this directory /docker-mount.
Next, I created a new directory under /docker-mount, say /docker-mount/files, and then mounted some CIFS-based storage from another PC's file system onto /docker-mount/files.
I expected the Docker container's file system to be able to use this network mount, but only files created locally under the directory are visible, not the contents mounted inside /docker-mount/files.
I assume this is just how the Linux file system works, but I'm still not confident in that idea.
Is there any way to make this possible?
I suggest that you mount your CIFS shared drive as a Docker volume instead. Relying on a drive shared with your host computer is not reliable in my experience, especially with respect to file changes being reflected in the Docker world. Besides, your production environment won't have this shared drive with your development host.
Create a Docker volume using the Netshare CIFS driver:
http://netshare.containx.io/docs/cifs#creating-a-volume-with-docker-volume
Then mount your volume normally on any container that requires access to the CIFS drive.
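A sketch based on the linked docs, with a hypothetical server and share name (check the Netshare documentation for the exact option names and authentication flags):

    # create a volume backed by the CIFS share via the Netshare plugin
    docker volume create -d cifs --name filesvolume --opt share=otherpc.local/files

    # mount it like any other volume
    docker run -it -v filesvolume:/docker-mount/files jupyter/base-notebook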