I deployed a Python 3 app to a Docker container and it is failing due to the following:
The app reads files from a Windows network share drive for processing. The app runs fine when I run it from my Windows machine, where I have access to the share drive.
In the remote Linux Docker container, the app fails because it can't see the shared folder.
I would appreciate any advice or example on how to make the share drive visible to the Docker container. Right now, in the Python code, I point to the share using the os package, for example os.listdir(path), where path is
\\myshare.abc.com\myfolder
You need to mount the shared storage inside the Docker container to access it.
Mount volumes in the docker run command like this:
docker run --volume-driver=nfs -v server/dir:/path/to/mount/point ...
where server/dir represents the shared path and the rest is the path accessible inside the container.
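Since \\myshare.abc.com\myfolder is a Windows (SMB/CIFS) share rather than NFS, a named volume backed by the local driver's cifs type may be a better fit. A minimal sketch, assuming the share accepts username/password authentication (the credentials and volume name below are placeholders) and the Linux host has CIFS support:

# Create a named volume that mounts the SMB share when a container starts
docker volume create --driver local \
    --opt type=cifs \
    --opt device=//myshare.abc.com/myfolder \
    --opt o=username=myuser,password=mypass \
    myfolder

# Attach it to the container at a Linux path
docker run -v myfolder:/mnt/myfolder ...

Inside the container, the Python code would then use the Linux path, e.g. os.listdir('/mnt/myfolder'), instead of the UNC path.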
Read more about volumes: https://docs.docker.com/storage/volumes/
I am trying to mount a folder of the host machine into a Docker container, but without success. I have the following setup:
1. Windows machine
2. From 1 I access a Linux server
3. On 2 I create a Docker container that should be able to access files on 1
In the Dockerfile I do the following:
ADD //G/foo/boo /my_project/boo
This throws an error that the folder cannot be found, since the container tries to access the folder on the Linux server. However, I want the container to access the Windows machine.
Ideally without copying the files from the source to the target folder. I am not sure whether ADD copies the files or just provides access to them.
Volumes are designed to be attached to running containers, not to the containers used to build the Docker image. (ADD does copy files, but only from the build context on the machine where the image is built, which is why your Dockerfile cannot see the Windows machine.) If you want your running container to access a shared file system, you need to attach the volume when the application container is created. How you do this depends on what you use to deploy the containers, but if you are using docker-compose it can be done as shown below:
nginxplus:
image: bhc-nginxplus
volumes:
- "${path_on_the_host}:${path_in_the_container}"
or with plain docker commands:
docker run -v ${path_on_the_host}:${path_in_the_container} $image
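For this particular setup (a container on the Linux server that needs files from the Windows machine), one approach is to mount the Windows share on the Linux host first and then bind-mount that directory into the container. A sketch, assuming the folder is shared over SMB and cifs-utils is installed on the server; the share name and credentials are placeholders:

# On the Linux server: mount the Windows share
sudo mkdir -p /mnt/boo
sudo mount -t cifs //windows-machine/foo/boo /mnt/boo -o username=myuser

# Then bind-mount the mounted path into the container
docker run -v /mnt/boo:/my_project/boo $image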
As per the title, I have created an Azure App Service running a Tomcat image (Docker container).
When I set up the path map to a File Share, the container or Tomcat keeps complaining that the folder I mounted into the container is not writable.
I read on Azure's website that mounted File Shares are read/write => https://learn.microsoft.com/en-us/azure/app-service/containers/how-to-serve-content-from-azure-storage
So I'm confused as to why it's still not working. Any help would be really appreciated with this issue.
Not sure how you mounted the storage in your Dockerfile (drop your snippet if possible), but I simply made the WORKDIR match the mount path. My Dockerfile was for a Node app, but that shouldn't make a difference. I also realized that I didn't need the VOLUME keyword (it's commented out in the sketch below).
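A minimal sketch of that Dockerfile, assuming the Azure path mapping is configured with /data as the mount path (the base image, paths, and file names are placeholders):

FROM node:14
# App code lives outside the mount path
COPY . /app
# VOLUME /data    <- turned out not to be needed, left commented out
# Make the working directory match the Azure mount path so that
# relative writes land on the File Share
WORKDIR /data
CMD ["node", "/app/index.js"]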
I have deployed a .NET Core 3.1 project on DigitalOcean using Docker. In my project, inside the wwwroot directory, there is an images directory where I upload my pictures. After uploading, I can see the pictures in the browser.
But the problem is that if I build the Docker project again and run it, it doesn't show the pictures which were previously uploaded.
My docker build command is: docker build -t "jugaadhai-service" --file Dockerfile .
and my docker run command is: docker run -it -d -p 0.0.0.0:2900:80 jugaadhai-service
EDIT 1: After some searching I came to know that when the project runs through Docker, the files get uploaded into the container's own filesystem, not into the project directory. That's why the images are not there after a new build.
So when a Docker container is created, it's an isolated virtual environment running on a host machine. Whether the host machine is your local computer or some host in the cloud does not really matter; it works the same way. The container is created from the build definition in the Dockerfile.
This means you can replicate this in your local environment: build the image, upload a few pictures, and then delete the container and start a new one from the same image tag. The uploaded pictures are gone then too.
If you upload pictures or files to a container on, let's say, DigitalOcean, and you redeploy a new container with a different tag, the pictures still live inside the old container. The same thing happens on, let's say, Kubernetes: if a pod/container restart happens, again everything is lost, as if a new container had been built.
This is where volumes come into play. When you have persistent data you want to keep, you should store it outside of the container itself. If you want to store the images on the host machine or some other network drive, you have to specify that and map it into the container.
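For example, a minimal sketch of the run command from the question, assuming the app runs from /app inside the container (the host path is a placeholder):

# Bind-mount a host directory over the container's images directory
# so uploaded pictures survive rebuilds and redeployments
docker run -it -d -p 0.0.0.0:2900:80 \
    -v /var/jugaadhai/images:/app/wwwroot/images \
    jugaadhai-service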
You can find out more about it here:
https://docs.docker.com/storage/volumes/
https://docs.docker.com/storage/
https://www.youtube.com/watch?v=Nxi0CR2lCUc
https://medium.com/bb-tutorials-and-thoughts/understanding-docker-volumes-with-an-example-d898cb5e40d7
I have a running docker container with some service running inside it. Using that service, I want to pull a file from the host into the container.
- docker cp won't work because that command is run from the host; I want to trigger the copy from inside the container.
- Mounting host filesystem paths into the container is not possible without stopping the container, and I cannot stop the container. I can, however, install other things inside this Ubuntu container.
- I am not sure scp is an option since I don't have the login/password/keys to the host from the running container.
Is it even possible to pull/copy a file into a container from a service running inside the container? What are my options here? FTP? Telnet?
Thanks
I don't think you have many options. One idea: if
- the host has a web server (or FTP server) up and running, and
- the file is located in the appropriate directory (so that it can be served),
then maybe you can use wget or curl to get the file. Keep in mind that you might need credentials, though...
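A minimal sketch of that idea, assuming the host can run Python's built-in http.server and the container can reach the host (the port, paths, and host address are assumptions):

# On the host: serve the directory containing the file
python3 -m http.server 8000 --directory /path/to/files

# Inside the container: fetch the file. host.docker.internal resolves
# to the host on Docker Desktop; on a plain Linux host the docker0
# bridge gateway (often 172.17.0.1) usually works instead
curl -o /tmp/file.txt http://host.docker.internal:8000/file.txt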
IMHO, if what you are asking for is doable, it is a security hole.
Pass the host path as a parameter to your Docker container, customize the Docker image to read the file from that path (the parameter above), and use the file as required.
You could validate it in the Docker entrypoint script.
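A minimal sketch of such an entrypoint script, assuming the file path arrives in an environment variable (the variable name is hypothetical):

#!/bin/sh
# Fail fast if the expected file is missing, then hand off to the app
if [ ! -f "$INPUT_FILE" ]; then
    echo "Expected file not found at: $INPUT_FILE" >&2
    exit 1
fi
exec "$@"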
Been banging my head against this, and would appreciate any help! I did search and found a few potential solutions, but all were a no-go for me.
I have a Node.js application developed on Windows that is now running in a Docker container (using the node:latest image). The application needs to connect to a shared Windows network drive to perform reads/writes, but it doesn't seem to work when the application runs inside the Docker container. (It works fine when run on my Windows machine outside of Docker.)
The network drive shared folder looks something like this. Shared to everyone:
\\myserver\test
The Node.js code (not exact, but it gives you an idea of how I am accessing the file):
const fs = require('fs')

// Backslashes must be escaped in JavaScript string literals,
// otherwise \t and \f are parsed as tab and form-feed characters
let fileLocation = '\\\\myserver\\test\\file.pdf'
let readStream = fs.createReadStream(fileLocation)

// When the stream is done being read, end the response
readStream.on('close', () =>
{
    // End response
    response.status(200).end()
    // Remove the file
    fs.unlinkSync(fileLocation)
})

// Stream chunks to the response
readStream.pipe(response)
This works fine when run in Node.js on Windows, but I keep getting an "ENOENT: no such file or directory" error when it runs inside the Node.js Docker container. I'm not sure if this is a Linux OS issue (which the node image is based on) or a Docker permission/privilege issue. The file does exist.
This is the command I'm using to test when I'm running the container
docker run --rm -it -p 8080:8080 myapplication:latest
New to Docker, but any help is appreciated!
Try mounting the volume using docker run -v (see the documentation).
I'm hoping it will work on your Windows network share drive.
Update
The syntax for volume mounting on Windows is:
docker run -v c:/Users:/data alpine
where c:/Users is the Windows directory on the host machine you'd like to mount (equivalent to C:\Users in Windows syntax), and /data is the path of the mounted directory inside the running container.
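For the share in the question, one possibility, as an untested sketch: if \\myserver\test is mapped to a drive letter on the Windows host, say Z: (the drive letter is an assumption), it can be bind-mounted like a local directory. Docker Desktop may first require adding the drive under its file-sharing settings:

# Mount the mapped network drive into the container at /data
docker run --rm -it -p 8080:8080 -v z:/:/data myapplication:latest

The Node.js code would then read the file via a POSIX path such as /data/file.pdf instead of the UNC path.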