I am trying to mount a folder of the host machine to docker container but without success. I have the following setup:
1. Windows machine
2. From 1 I access a Linux server
3. On 2 I create a Docker container that should be able to access files on 1
In the dockerfile I do the following:
ADD //G/foo/boo /my_project/boo
This throws an error that the folder cannot be found, since the container tries to resolve the path on the Linux server. However, I want the container to access the folder on the Windows machine.
Ideally without copying the files from the source to the target folder. I am not sure whether ADD copies the files or just makes them accessible.
Volumes are designed to be attached to running containers, not to the containers used to build the Docker image. If you want your running container to access a shared file system, you need to attach the volume when the application container is created. How you do this depends on what you use to deploy the containers, but with docker-compose it can be done as shown below:
nginxplus:
  image: bhc-nginxplus
  volumes:
    - "${path_on_the_host}:${path_in_the_container}"
or with plain docker commands:
docker run -v ${path_on_the_host}:${path_in_the_container} $image
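For example, assuming the Windows folder had first been made available on the Linux host (say, mounted over SMB at a hypothetical mount point /mnt/winshare), the bind mount for the paths from this question might look like:
# bind-mount the host-side copy of the Windows folder into the container path the Dockerfile expected
docker run -v /mnt/winshare/foo/boo:/my_project/boo $image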
Related
For copying files/folders from the host to a container or vice versa, I can see docker commands like:
docker cp foo.txt mycontainer:/foo.txt
docker cp mycontainer:/foo.txt foo.txt
But I have a shared folder on a remote machine, which I need to copy into the Docker container.
That is, my pipeline runs on host A (which could be Windows or Linux), the shared folder is on remote host B (which will always be a Windows machine), and I need to copy that folder from host B into the container running on host A.
Is there any docker command to achieve this?
Since host A could be either a Windows or a Linux machine, I would otherwise have to handle each case differently, so I am looking for a docker command that works irrespective of the OS.
The shared folder on remote host B is exposed via the SMB protocol. As far as I know, Docker does not provide any direct command to copy files into your container over SMB.
To make everything work smoothly in your case, you will need to create one more container whose task is to run samba-client. Then you can fetch the files from host B and save them on host A with a command like this:
smbget -R smb://hostB/directory
Then you can copy the files from host A into the container as usual.
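A rough sketch of that two-step flow, assuming smbget is installed on host A and that the container name, share name and credentials below are placeholders:
# on host A: recursively download the shared folder from host B over SMB
smbget -R -U myuser smb://hostB/shared_folder
# then copy the downloaded folder into the running container
docker cp ./shared_folder mycontainer:/data/shared_folder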
I have deployed a .NET Core 3.1 project on DigitalOcean using Docker. In my project, inside the wwwroot directory, there is an images directory where I upload my pictures. After uploading, I can see the pictures in the browser.
But the problem is that if I build the Docker project again and run it, it no longer shows the pictures that were previously uploaded.
My docker build command is: docker build -t "jugaadhai-service" --file Dockerfile .
and docker run command is docker run -it -d -p 0.0.0.0:2900:80 jugaadhai-service
EDIT 1: After some searching I learned that when the project runs inside Docker, the files are uploaded into the container's own filesystem, not into the project directory on the host. That is why the images are gone after a new build.
When a Docker container is created, it is an isolated virtual environment running on a host machine. Whether the host machine is your local computer or some host in the cloud does not really matter; it works the same way. The container is created from the build definition in the Dockerfile.
This means you can replicate the problem in your local environment: build the image, upload a few pictures, and then delete the image or create a new image with the same tag. The uploaded pictures are gone then as well.
If you upload images or files to a container on, say, DigitalOcean, and you redeploy a new container with a different tag, the images still live inside the old container. The same thing happens if you run on, say, Kubernetes: if a pod/container restart happens, again everything is lost, as if a new container had been built.
This is where volumes come into play. When you have persistent data you want to keep, you should store it outside of the container itself. If you want to store the images on the host machine or some other network drive, you have to specify that and map it into the container.
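As a rough sketch for the setup in this question (the volume name images-data and the container path /app/wwwroot/images are assumptions; use the path your Dockerfile actually publishes the app to):
# create a named volume once; it survives rebuilds and new containers
docker volume create images-data
# run the image with the uploads directory mapped to that volume
docker run -it -d -p 0.0.0.0:2900:80 -v images-data:/app/wwwroot/images jugaadhai-service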
You can find out more about it here:
https://docs.docker.com/storage/volumes/
https://docs.docker.com/storage/
https://www.youtube.com/watch?v=Nxi0CR2lCUc
https://medium.com/bb-tutorials-and-thoughts/understanding-docker-volumes-with-an-example-d898cb5e40d7
I deployed a Python 3 app to a Docker container and it is failing for the following reason:
The app reads files from a Windows network share drive for processing. The app runs fine when I run it from my Windows machine, where I have access to the share drive.
In the remote Linux Docker container, the app fails because it can't see the shared folder.
I would appreciate any advice or example on how to make the share drive visible to the Docker container. Right now, in the Python code, I point to the share using the os package, for example os.listdir(path), where path is
\\myshare.abc.com\myfolder
You need to mount the shared storage inside the Docker container to access it.
Mount it as a volume in the docker run command:
docker run --volume-driver=nfs -v server/dir:/path/to/mount/point ...
where server/dir represents the shared path and the rest is the path accessible inside the container.
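Since the share in this question is a Windows (SMB/CIFS) share rather than NFS, an alternative sketch is to create a CIFS-backed named volume with the local driver and mount that into the container (the volume name, credentials and container path below are placeholders):
# create a named volume backed by the CIFS share
docker volume create \
  --driver local \
  --opt type=cifs \
  --opt device=//myshare.abc.com/myfolder \
  --opt o=addr=myshare.abc.com,username=myuser,password=mypass,vers=3.0 \
  myfolder-volume
# mount it into the container
docker run -v myfolder-volume:/mnt/myfolder myimage
Inside the container the Python code can then call os.listdir('/mnt/myfolder') instead of using the UNC path.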
Read more about volumes: https://docs.docker.com/storage/volumes/
I have a running docker container with some service running inside it. Using that service, I want to pull a file from the host into the container.
docker cp won't work because that command is run from the host; I want to trigger the copy from inside the container.
Mounting host filesystem paths into the container is not possible without stopping the container, and I cannot stop the container. I can, however, install other things inside this Ubuntu container.
I am not sure scp is an option, since I don't have the login/password/keys for the host from inside the running container.
Is it even possible to pull/copy a file into a container from a service running inside that container? What are my options here? FTP? Telnet?
Thanks
I don't think you have many options. One idea is that if:
the host has a web server (or FTP server) up and running
and the file is located in the appropriate directory (so that it can be served)
maybe you can use wget or curl to get the file. Keep in mind that you might need credentials though...
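A minimal sketch of that approach, assuming the host can run a throwaway HTTP server and HOST_IP is an address of the host reachable from inside the container (on a default Linux bridge network this is often the docker0 gateway, 172.17.0.1):
# on the host: serve the directory that contains the file (python3 is one easy option)
python3 -m http.server 8000 --directory /path/to/files
# inside the container: fetch the file with curl (or wget)
curl -o /tmp/myfile http://HOST_IP:8000/myfile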
IMHO, if what you are asking for is doable, it is a security hole.
Pass the host path as a parameter to your Docker container, customize the Docker image to read the file from that path (the parameter passed above), and use the file as required.
You could validate this in the Docker entrypoint script.
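A hedged sketch of what that validation could look like, where FILE_PATH is an assumed environment variable carrying the path and the check lives in the entrypoint script:
#!/bin/sh
# entrypoint.sh: fail fast if the expected file was not provided to the container
if [ ! -f "$FILE_PATH" ]; then
    echo "Expected a file at $FILE_PATH, but none was found" >&2
    exit 1
fi
exec "$@"
The caller would then pass the path in, e.g. docker run -e FILE_PATH=/data/input.txt -v /host/dir:/data myimage (all names here are placeholders).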
I'm trying to use a stack built with Docker containers to run a Symfony2 application (SfDocker). The stack consists of interlinked containers where ubuntu:14.04 is the base:
mysql db
nginx
php-fpm
The recurring problem that I'm facing is managing directory permissions inside the container. When I mount a volume from the host, e.g.
volumes:
  - symfony-code:/var/www/app
The mounted directories will always be owned by root or an unidentified user (only the user ID is visible when running ls -al) inside the container.
This essentially makes it impossible to access the application through the browser. Of course, running chown -R root:www-data on the public directories solves the problem, but as soon as I want to write to e.g. the 'cache' directory from the host (where the user is ltarasiewicz), I get a permission-denied error. On top of that, whenever the application running inside the container creates new directories (e.g. 'logs'), they again are owned by root and later inaccessible to the browser or my desktop user.
So my questions are:
1. How should I manage permissions across the host and container environments (when I want to run commands on the container from both environments)?
2. Is it possible to configure Docker so that directories mounted as volumes receive specific ownership/permissions (e.g. root:www-data) automatically?
3. Am I free to create new users and user groups inside my 'nginx' container built from the ubuntu:14.04 image?
A few general points, apologies if I don't answer your questions directly.
Don't run as root in the container. Create a user in the Dockerfile and switch to it, either with the USER statement or in an entrypoint or command script. See the official Redis image for a good example of this. (So the answer to Q3 is yes, and you should, but via a Dockerfile - don't make changes to containers by hand.)
Note that the official images often do a chown on volumes in the entrypoint script to avoid the issue you describe in question 2 (a sketch of such an entrypoint follows at the end of this answer).
Consider using a data container rather than linking directly to host directories. See the official docs for more information.
Don't run commands from the host on the volumes. Just create a temporary container to do it or use docker exec (e.g. docker run -v /myvol:/myvol myimage touch /myvol/x).
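Following up on the chown note above, here is a sketch of the kind of entrypoint script the official images use to fix volume ownership before dropping privileges (it assumes a www-data user exists in the image and that gosu is installed; adapt users and paths to your setup):
#!/bin/sh
# docker-entrypoint.sh: give the app user ownership of the mounted volume,
# then step down from root and run the real command as that user
chown -R www-data:www-data /var/www/app
exec gosu www-data "$@"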