Docker mount files to my local system while running the container - node.js

I am using the image academind/node-example-1, which is a simple node image. You can check it here: https://hub.docker.com/r/academind/node-example-1. When I run the image, I want to get the same folder & file structure on my machine that exists in the image. I know that can be done via a volume. When I use:
docker run -d --rm --name node-test -p 5000:80 academind/node-example-1
Everything works properly, but I want to get the codebase while the container is running, so I tried:
docker run -d --rm --name node-test -p 5000:80 -v /Users/souravbanerjee/Documents/node-docker:node-example-1 academind/node-example-1
Here node-docker is my local folder where I expect the code to be. The container runs, but I don't get the files on my local machine. I'm in doubt about the source_path:destination_path order here. Please tell me where I'm wrong, what to do, or whether my entire thinking is going in the wrong direction.
Thanks.

If you read the official docs, you'll see that the first part before the : should be a path somewhere on the host machine (which you're doing), while the latter part should match a path inside the container (instead you're using the image name). Assuming /app is the path (I've taken that course myself and this is the path AFAIR), it should be:
docker run -d --rm --name node-test -p 5000:80 -v /Users/souravbanerjee/Documents/node-docker:/app academind/node-example-1

The correct syntax is to enter the host path and the container path in the volume mapping.
E.g. /Users/xxxx/:/some/container/location
However, that would map your empty host directory at xxxx over the top of the 'location' folder, hiding the existing files in the container for the lifetime of the mount.
If you are just interested in seeing the contents of the files in the container, you should consider using the docker cp command instead.
People often use volume mounts to push data (e.g. persistent database files) into a container.
Alternatively, the application inside the container can write log files to the volume-mounted location; those files are then reflected on your local drive.

You can copy the files to the current host directory, while the container is running, using the command
docker cp node-test:/app .

Related

mounting volume from inside the container in local directory [duplicate]

Assume that i have an application with this simple Dockerfile:
# ...
RUN configure.sh --logmyfiles /var/lib/myapp
ENTRYPOINT ["starter.sh"]
CMD ["run"]
EXPOSE 8080
VOLUME ["/var/lib/myapp"]
And I run a container from that:
sudo docker run -d --name myapp -p 8080:8080 myapp:latest
So it works properly and stores some logs in /var/lib/myapp of docker container.
My question
I need these log files to be automatically saved on the host too, so how can I mount the /var/lib/myapp from the container to the /var/lib/myapp on the host server (without removing the current container)?
Edit
I also saw Docker - Mount Directory From Container to Host, but it doesn't solve my problem: I need a way to back up my files from the container to the host.
First, a little information about Docker volumes. Volume mounts occur only at container creation time. That means you cannot change volume mounts after you've started the container. Also, volume mounts are one-way only: from the host to the container, and not vice-versa. When you specify a host directory mounted as a volume in your container (for example something like: docker run -d --name="foo" -v "/path/on/host:/path/on/container" ubuntu), it is a regular old Linux mount --bind, which means that the host directory will temporarily "override" the container directory. Nothing is actually deleted or overwritten in the destination directory, but because of the nature of containers, that effectively means it will be overridden for the lifetime of the container.
So, you're left with two options (maybe three). You could mount a host directory into your container and then copy those files in your startup script (or if you bring cron into your container, you could use a cron to periodically copy those files to that host directory volume mount).
You could also use docker cp to move files from your container to your host. Now, that is kinda hacky, and definitely not something you should use in your infrastructure automation, but it works very well for exactly that purpose: one-off copies or debugging.
You could also possibly set up a network transfer, but that's pretty involved for what you're doing. However, if you want to do this regularly for your log files (or whatever), you could look into using something like rsyslog to move those files off your container.
So how can i mount the /var/lib/myapp from the container to the /var/lib/myapp in host server
That is the opposite: you can mount a host folder into your container on docker run.
(without removing current container)
I don't think so.
Right now, you can check docker inspect <containername> and see if your log appears under /var/lib/docker/volumes/... for the volume associated with your container.
Or you can redirect the output of docker logs <containername> to a host file.
For more example, see this gist.
The alternative would be to mount a host directory as the log folder and then access the log files directly on the host.
me@host:~$ docker run -d -p 80:80 -v <sites-enabled-dir>:/etc/nginx/sites-enabled -v <certs-dir>:/etc/nginx/certs -v <log-dir>:/var/log/nginx dockerfile/nginx
me@host:~$ ls <log-dir>
(again, that applies to a container that you start, not to an existing running one)

Nvidia-docker add folder to container

I'm pretty new to docker containers. I understand there are ADD and COPY operations so a container can see files. How does one give the container access to a given directory where I can put my datasets?
Let's say I have a /home/username/dataset directory; how do I make it available at /dataset or something in the docker container so I can reference it?
Is there a way for the container to reference a directory on the main system, so you don't have to have duplicate files? Some of these datasets will be quite large, and while I can delete the original after copying it over, that's just annoying if I want to do something outside the docker container with the files.
You cannot do that at build time unless you copy the dataset into the build context.
When you run the container, you can instead use a volume bind mount:
docker run -it -v /home/username/dataset:/dataset <image>
Directories on the host can be mapped to directories inside the container.
If you are using docker run to start your container, you can include the -v flag to mount volumes.
docker run --rm -v "/home/username/dataset:/dataset" <image_name>
If you are using a compose file, you may include volumes using:
volumes:
- /home/<username>/dataset:/dataset
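A fuller sketch of such a compose file (the service and image names are placeholders; the modern top-level `services:` key is assumed) might look like:

```yaml
# Hypothetical docker-compose.yml -- only the service/image names are made
# up; the bind-mount line mirrors the one above.
services:
  trainer:
    image: my-image:latest
    volumes:
      - /home/username/dataset:/dataset
```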
For a detailed description of how to use volumes, see Use volumes in Docker.

Docker -- mounting a volume not behaving like regular mount

I am new to docker so I am certain I am doing something wrong. I am also not a php developer but that shouldn't matter in this case.
I am using a drupal docker image which has data at the /var/www/html directory.
I am attempting to overwrite this data with a drupal site from a local directory on the host system.
According to the docs this is the expected behavior
Mount a host directory as a data volume
In addition to creating a volume using the -v flag you can also mount a directory from your Docker engine’s host into a container.
$ docker run -d -P --name web -v /src/webapp:/webapp training/webapp python app.py
This command mounts the host directory, /src/webapp, into the container at /webapp. If the path /webapp already exists inside the container’s image, the /src/webapp mount overlays but does not remove the pre-existing content. Once the mount is removed, the content is accessible again. This is consistent with the expected behavior of the mount command.
However I am finding that the local drupal site files do not exist on the container. My complete workflow is as follows:
docker-compose.yml
drupal:
  container_name: empower_drupal
  build: ./build/drupal-local-codebase
  ports:
    - "8888:80"
    - "8022:22"
    - "443"
#volumes: THIS IS ALSO NOT WORKING
#- /home/sameh/empower-tap:/var/www/html
$ docker-compose up -d
# edit the container by snapshotting it
$ docker commit empower_drupal empower_drupal1
$ docker run -d -P --name empower_drupal2 -v /home/sameh/empower-tap:/var/ww/html empower_drupal1
# snapshot the container to examine it
$ docker commit 9cfeca48efd3 empower_drupal2
$ docker run -t -i empower_drupal2 /bin/bash
The empower_drupal2 container does not have the correct files from the /home/sameh/empower-tap directory.
Why this did not work
Here's what you did, with some annotations.
$ docker-compose up -d
Given your docker-compose.yml, with the volumes section commented out, at this point you have running container, but no volumes mounted.
# edit the container by snapshotting it
$ docker commit empower_drupal empower_drupal1
All you've really done here is made a copy of the image you had already, unless your container makes changes to itself on startup.
$ docker run -d -P --name empower_drupal2 -v /home/sameh/empower-tap:/var/ww/html empower_drupal1
Here you have run your new copy, mounted a volume. Ok, the files are available in this container now.
# snapshot the container to examine it
$ docker commit 9cfeca48efd3 empower_drupal2
I'm assuming here that you wanted to commit the contents of the volume into the image. That will not work. The commit documentation is clear about this point:
The commit operation will not include any data contained in volumes mounted inside the container.
$ docker run -t -i empower_drupal2 /bin/bash
So, as you found, when you run the image generated by commit, but without volume mounts, the files are not there.
Also, it is not clear in your docker-compose.yml example where the volumes: section was before it was commented out. Currently it seems to be on the left margin, which would not work. It would need to be at the same level as build: and ports: in order to work on your drupal service.
What to do instead
That depends on your goal.
Just copy the files from local
If you literally just want to populate the image with the files from your local system, you can do that in Dockerfile.
COPY local-dir/ /var/www/html/
You mentioned that this copy can't work because the directory is not local. Unfortunately that cannot be solved easily with something like a symlink. Your best option is to copy the directory to the local context before building. Docker does not plan to change this behavior.
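The "copy the directory to the local context before building" step can be sketched like this (the helper name and all paths are illustrative assumptions):

```shell
#!/bin/sh
# Hypothetical staging step: copy a directory that lives outside the build
# context into it, so a COPY instruction in the Dockerfile can pick it up.
# Paths and the image name are made up for illustration.
stage_into_context() {
  src="$1"      # directory somewhere else on the host
  context="$2"  # the docker build context directory
  mkdir -p "$context/local-dir"
  cp -r "$src/." "$context/local-dir/"
}

# Usage would then be roughly:
#   stage_into_context /home/sameh/empower-tap ./build/context
#   docker build -t empower_drupal ./build/context
```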
Override contents for development
A common scenario is you want to use your local directory for development, so that changes are reflected right away instead of doing a rebuild. But when not doing development, you want the files baked into the image.
In that case, start by telling Dockerfile to copy the files into the image, as above. That way an image build will contain them, volume mount or no.
Then, when you are doing development, use volumes: in docker-compose.yml, or the -v flag to docker run, to mount a volume. A volume mount will override whatever is baked into the image, so you will be using your local files. When you're done and the code is ready to go, just do an image build and your final files will be baked into the image for deployment.
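As a sketch, assuming compose's default override convention (a docker-compose.override.yml is merged automatically during development) and the paths from the question, the development-time mount could look like:

```yaml
# Hypothetical docker-compose.override.yml -- used only during development;
# production builds rely on the files baked in via COPY in the Dockerfile.
services:
  drupal:
    volumes:
      - /home/sameh/empower-tap:/var/www/html
```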
Use a volume plus a commit
You can also do this in a slightly roundabout way by mounting the volume, copying the contents elsewhere, then committing the result.
# start a container with the volume mounted somewhere
docker run -d -v /home/sameh/empower-tap:/var/www/html_temp [...etc...]
# copy the files elsewhere inside the container
docker exec <container-name> cp -r /var/www/html_temp/. /var/www/html/
# commit the result
docker commit empower_drupal empower_drupal1
Then you should have your mounted volume files in the resulting image.

How to mount a directory inside a docker container on Linux host [duplicate]

This question already has answers here:
Mount directory in Container and share with Host
(3 answers)
Closed 4 years ago.
I would like to mount a directory from a docker container to the local filesystem. The directory is a website root, and I need to be able to edit it on my local machine using any editor.
I know I can run docker run -v local_path:container_path, but doing that only creates an empty directory inside the container.
How can one mount a directory inside a docker container on linux host?
It is a bit weird, but you can use named volumes for that. Unlike host-mounted volumes, named ones won't be emptied, and you can access the directory. See the example:
docker volume create --name data
docker run --rm=true -v data:/etc ubuntu:trusty
docker volume inspect data
[
    {
        "Name": "data",
        "Driver": "local",
        "Mountpoint": "/var/lib/docker/volumes/data/_data",
        "Labels": {},
        "Scope": "local"
    }
]
See the mount point?
mkdir ~/data
sudo -s
cp -r /var/lib/docker/volumes/data/_data/* ~/data
echo "Hello World" > ~/data/hello.txt
docker run --rm=true -v ~/data:/etc ubuntu:trusty cat /etc/fstab #The content is preserved
docker run --rm=true -v ~/data:/etc ubuntu:trusty cat /etc/hello.txt #And your changes too
It is not exactly what you were asking for, but depending on your needs it may work.
Regards
If your goal is to provide a ready-to-go LAMP, you should use the VOLUME declaration inside the Dockerfile.
VOLUME volume_path_in_container
The problem is that docker will not mount the files because they were already present in the path you are creating the volume on. You can either go as @Grif-fin said in his comment, or modify the entrypoint of the container so it copies the files you want to expose into the volume at run time.
You have to insert your data using the COPY or ADD command in the Dockerfile so the base files are present in the container.
Then create an entrypoint that copies the files from the COPY path to the volume path.
Then run the container using the -v flag, like -v local_path:volume_path_in_container. This way, you should have the files inside the container mounted locally. (At least, that is what I did.)
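A sketch of such an entrypoint (the paths, and the "seed only when empty" guard, are assumptions layered on top of the steps above):

```shell
#!/bin/sh
# Hypothetical entrypoint: seed a (possibly host-mounted) volume path from
# the location the Dockerfile COPYed the files to. Assumed layout:
#   /opt/seed     -- where the Dockerfile put the files (COPY site/ /opt/seed/)
#   /var/www/html -- the VOLUME / bind-mount target
seed_volume() {
  seed="$1"
  target="$2"
  mkdir -p "$target"
  # seed only when the target is empty, so later host edits survive restarts
  if [ -z "$(ls -A "$target" 2>/dev/null)" ]; then
    cp -r "$seed/." "$target/"
  fi
}

# A real entrypoint would end with something like:
#   seed_volume /opt/seed /var/www/html
#   exec "$@"
```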
Find an example here: https://github.com/titouanfreville/Docker/tree/master/ready_to_go_lamp.
It avoids having to rebuild every time, and you can provide it from a definitive image.
To make it nicer, you could add user support so that you own the mounted files (if you are not root).
Hope it was useful to you.

Mount data volume to docker with read&write permission

I want to mount a host data volume into docker. The container should have read and write permission to it; meanwhile, any changes to the data volume should not affect the data on the host.
I can imagine a solution that mounts several data volumes to a single folder, one read-only and another read-write. But only the second '-v' works in my command:
docker run -ti --name build_cent1 -v /codebase/:/code:ro -v /temp:/code:rw centos6:1.0 bash
only this second '-v' works in my command,
That might be because both -v options attempt to mount host folders on the same container destination folder /code.
-v /codebase/:/code:ro
              ^^^^^
-v /temp:/code:rw
         ^^^^^
You could mount those host folders in two separate folders within /code.
As in:
-v /codebase/:/code/base:ro -v /temp:/code/temp:rw.
Normally in this case I think you ADD the folder to the Docker image, so that any container running it will have it in its (writeable) filesystem, but writes will go to a different layer.
You need to write a Dockerfile in the folder above the one you wish to use, which should look something like this:
FROM my/image
ADD codebase /codebase
Then you build the image using docker build -t some-name <path>. These steps could be added to the build scripts of your app (maybe you will find some plugin to help there). Then you can docker run some-name.
The downside is that there is one copy to do and the image creation, but should you launch many containers they will share the same copy of the layer in read-only and write their own modifications to independent layers above.
Got one answer from nixun on GitHub:
You can simply use overlayfs to fix this:
mount -t overlay overlay \
-olowerdir=/codebase,upperdir=/temp,workdir=/workdir /codebase_new
docker run -ti --name build_cent1 -v /codebase_new:/code:rw centos6:1.0 bash
This solution has good flexibility. Creating an image with a shared folder would also be a solution, but it cannot update the folder data easily.
This answer is not for docker users, but it will help anyone who uses Lima to manage their containers.
I was stuck trying to solve this issue with limactl and lima nerdctl. I think the fix is worth sharing, so that it may help anyone in the community who's using Lima instead of docker:
By default, Lima mounts volumes as read-only. To make them writable by default, do the following:
Edit the file and set writable: true under the mounts section
$ vim ~/.lima/default/lima.yaml
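The relevant part of the file looks roughly like this (key names as in recent Lima releases; double-check against the version you have installed):

```yaml
mounts:
  - location: "~"
    writable: true
```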
Then restart Lima:
limactl list           # lists all running VMs
limactl stop default   # or the name of the machine
limactl start default  # or the name of the machine
You would still need to specify mount options exactly as with docker:
lima nerdctl run -ti --name build_cent1 \
-v /codebase/:/code/base:ro \
-v /temp:/code/temp:rw \
centos6:1.0 bash
For more information about lima, please check this out
