Sharing a single file from the host machine with a Docker container and having the container read/write that same file - linux

I have a .json file that I want to persist between runs of a given container. In addition, the container needs to append to this file as part of its running.
The syntax which I've been using so far is as follows:
docker run -d -p 3001:3001 -v /usr/bob/database.json:/app/data/database.json:rw --name=myapp appImage
Nothing gets inserted into the file (though I can cat the contents inside and outside the container to confirm they are the same). I have ensured that the root user (yes, not best practice) who is running Docker owns all of the files in that folder and has full rwx permissions.
What DOES work is binding at the folder level, e.g.:
docker run -d -p 3001:3001 -v /usr/bob:/app/data --name=myapp appImage
Can anyone explain the difference?
I feel that sharing access to a folder instead of a single file is a lot less precise, and it also forces structural changes in the app (e.g. source control now has multiple files, plus the .json file mentioned, in the same folder).
Thanks in advance for any pointers.
Andrew

Mounting a file as a volume mounts a specific inode inside the container. Many tools that modify a file replace its inode when writing the new copy, and the directory entry is updated to point the filename at that new inode. When the directory is mounted, you see the change on your host; otherwise you only see it inside the container, since the inode on the host and the pointer to it in the host directory are unchanged.
There are more details on this behavior in Docker's tutorial on volumes: https://docs.docker.com/engine/tutorials/dockervolumes
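You can observe the inode swap yourself on the host (a minimal sketch; the path is the one from the question, and sed stands in for whatever tool rewrites the file):

ls -i /usr/bob/database.json                  # note the inode number
sed -i 's/foo/bar/' /usr/bob/database.json    # sed writes a temp file, then renames it over the original
ls -i /usr/bob/database.json                  # the inode number has changed; a bind mount of the
                                              # old inode still points at the stale copy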

Related

How to resolve the file processing issue during docker volume mount in linux?

I am trying to containerize my application. The application basically processes files and places them in a different folder after renaming them. The source folder is "/opt/fileprocessing/input" and the target is "/opt/fileprocessing/output".
Scenario 1 - without volume mount
When I start my docker container and place a file in the source folder using the docker cp command, the application processes it and places it successfully in the target folder.
Scenario 2 - with volume mount to the host
docker run -d -v /opt/input:/opt/fileprocessing/input -v /opt/output:/opt/fileprocessing/output --name new_container processor
When I place the file in the /opt/input folder of the host, the application throws an error that it can't place the file in the destination. If I go inside the container and view the input folder, I see the file there, which confirms that the mount has happened successfully. It fails when renaming the file and placing it in the destination (this is an application-level error, so not much help to get there).
I tried the following to make it work.
Made sure the host and container users are the same and have the same uid and gid
The file has 775 permissions set
The container folder has 777 permissions
The same file was placed that was used in scenario 1
The file name and format are the same as well
container OS
NAME="CentOS Linux"
VERSION="8"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="8"
PLATFORM_ID="platform:el8"
host os
NAME="Red Hat Enterprise Linux Server"
VERSION="7.6 (Maipo)"
ID="rhel"
ID_LIKE="fedora"
VARIANT="Server"
Scenario 3 - mounted the folder in a different way, as below
docker run -d -v /opt/fileprocessing:/opt/fileprocessing --name new_container processor
where the fileprocessing folder on both the container and the host has two subdirectories named input and output.
This way of mounting works for me without any issues.
Please let me know why scenario 2 failed to work and how to fix it?
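One thing worth checking (an assumption, not confirmed in the question): with two separate bind mounts, input and output are distinct mount points inside the container, and rename(2) fails with EXDEV ("Invalid cross-device link") across mount points, even when both sit on the same underlying host filesystem. A single mount of the parent folder, as in scenario 3, keeps both directories on one mount. You can inspect the mounts like this:

docker exec new_container grep fileprocessing /proc/self/mounts
# two separate entries mean input and output are distinct mount points;
# an application calling rename() across them would need to copy and
# delete instead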

Docker-compose volume set host Path

I've created a simple example where I'm trying to achieve a simple task: I need to mount a specific folder inside my container to a specific folder on the host machine.
The Dockerfile that creates the image I use in my docker-compose is the following:
FROM ubuntu:bionic
COPY ./file.txt /vol/file.txt
Into this folder, on the container side, I copy a file called file.txt, and I need this behaviour on the host machine:
If the host machine folder is empty or does not exist, or the file is missing, file.txt should be copied into it.
If the host machine folder already has a file with that name, the file is preserved and not overwritten.
This is exactly what happens when a named volume is used inside docker-compose. The problem with this is that you can't choose the folder on the host machine; you can only assign a name to the volume, and Docker will create it inside one of its own folders.
Using a bind mount instead lets you choose the folder on the host machine, but here the behaviour is different: even if the folder on the host machine is empty or does not exist, the file file.txt disappears from the container.
A practical example of this is a common application distributed with Docker. I start from a dotnet image and copy my application into it, to be started on container load. Inside this application I have a configuration file that I need to preserve across image updates, so I need to copy it to the host machine, and this file has to be editable by the user of the host machine, so it can't sit in a randomly named subfolder of the Docker installation.
Is there a way to achieve this?
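A common workaround is to seed the bind mount at container startup: keep the default file somewhere else in the image and copy it over only when it is missing. This is a sketch under stated assumptions; the /defaults folder and entrypoint.sh are illustrative names, not part of the question:

FROM ubuntu:bionic
COPY ./file.txt /defaults/file.txt
COPY ./entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
CMD ["bash"]

with entrypoint.sh:

#!/bin/sh
# copy the default only if the mounted folder does not already provide one
if [ ! -f /vol/file.txt ]; then
    mkdir -p /vol
    cp /defaults/file.txt /vol/file.txt
fi
exec "$@"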

Tracks stops working when volume is specified

I am trying to get the Tracks Docker image to work.
When I run the given command docker run -d --name=tracks -p 80:80 staannoe/tracks, everything goes fine. However, if I add a volume with docker run -d --volume /srv/tracks:/var/www --name=tracks -p 80:80 staannoe/tracks then suddenly it breaks: After doing the docker run, when I point my browser to the Tracks URL, all I get is a 404 error.
I noticed that /srv/tracks is always empty as well, unlike /var/www in the volume-less case. Notably, docker logs reveals that when I specify the volume, I get:
AH00112: Warning: DocumentRoot [/var/www/tracks/public] does not exist
I also get this error even if I manually create /srv/tracks/public. What is the problem?
EDIT: I am no longer sure if permissions are the problem. I did sudo chmod 777 /srv/tracks and I still get the same error. I also tried to sudo chgrp 33 /srv/tracks (33 is apparently www-data; by default the directory is owned by root:root) and this still didn't solve it.
The folder /var/www seems to contain data that is needed by your app. When you mount /srv/tracks as a volume to /var/www, its contents get hidden by the contents of /srv/tracks.
According to your warning message:
AH00112: Warning: DocumentRoot [/var/www/tracks/public] does not exist
Probably you want to do something like this instead, placing the tracks folder as a subfolder below www instead of overwriting the whole www folder:
docker run ... -v /srv/tracks:/var/www/tracks ...
Otherwise, if you need to keep the image's /var/www files but replace the www folder with a mount, you could place the tracks files into a different folder like /bootstrap during the Dockerfile build. Then during container startup you simply copy the /bootstrap files into /var/www by providing your own little startup script.
EDIT: Make sure you do not mount an empty tracks folder into /var/www/tracks. Some files are expected in that folder; when they are not found you get an HTTP 404. These entries are required at the top level:
$ docker exec -it tracks ls /var/www/tracks
COPYING README.md bin db lib public vendor
Gemfile Rakefile config doc log test
Gemfile.lock app config.ru features mkdocs.yml tmp
Make sure that the custom tracks folder that you use as a volume is based on the original image's contents.
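One way to seed the host folder from the image's own files before mounting over them (a hedged sketch using the image and container names from the question):

docker create --name seed staannoe/tracks     # create a container without starting it
docker cp seed:/var/www/tracks /srv/tracks    # copy the image's tracks folder to the host
docker rm seed
docker run -d --volume /srv/tracks:/var/www/tracks --name=tracks -p 80:80 staannoe/tracks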

Shared volume/file permissions/ownership (Docker)

I'm having a slightly annoying issue while using a Docker container (I'm on Ubuntu, so no virtualization like VMWare or b2d). I've built my image, and have a running container that has one shared (mounted) directory from my host, and one shared (mounted) file from my host. Here's the docker run command in full:
docker run -dit \
-p 80:80 \
--name my-container \
-v $(pwd)/components:/var/www/components \
-v $(pwd)/index.php:/var/www/index.php \
my-image
This works great, and both /components (and its contents) and the file are shared appropriately. However, when I want to make changes to either the directory (e.g. adding a new file or folder) or edit the mounted file (or any file in the directory), I'm unable to do so due to incorrect permissions. Running ls -lFh shows that the owner and group for the mounted items have been changed to libuuid:libuuid. Modifying either the file or parent directory requires root permissions, which impedes my workflow (as I'm working from Sublime Text, not Terminal, I'm presented with a popup for admin privs).
Why does this occur? How can I work around this / handle this properly? From Managing Data Volumes: Mount a Host File as a Data Volume:
Note: Many tools used to edit files including vi and sed --in-place may result in an inode change. Since Docker v1.1.0, this will produce an error such as “sed: cannot rename ./sedKdJ9Dy: Device or resource busy”. In the case where you want to edit the mounted file, it is often easiest to instead mount the parent directory.
This would seem to suggest that instead of mounting /components and /index.php, I should instead mount the parent directory of both. Sounds great in theory, but based on the behavior of the -v option and how it interacts with /directory, it would seem that every file in my parent directory would be altered to be owned by libuuid:libuuid. Additionally, I have lots of things inside the parent directory that are not needed in the container - things like build tools, various files, some compressed folders, etc. Mounting the whole parent directory would seem to be wasteful.
Running chown user:group on /components and /index.php on my host machine allows me to work around this, and the files seem to continue to sync with the container. Is this something I'll need to do every time I run a container with mounted host volumes? I'm guessing that there is a more efficient way to do this, and I'm just not finding an explanation for my particular use-case anywhere.
I am using this container for development of a module for another program, and have no desire to manage a data-only container - the only files that matter are from my host; persistence isn't needed elsewhere (like a database, etc).
Dockerfile and /setup: created on pastebin to avoid an even longer post. Never expires.
After creating the image, this is the run command I'm using:
docker run -dit \
-p 80:80 \
--name my-container \
-v $(pwd)/components:/var/www/wp-content/plugins/my-plugin-directory/components \
-v $(pwd)/index.php:/var/www/wp-content/plugins/my-plugin-directory/index.php \
my-image
It looks like your chown -R nginx:nginx ... commands inside your container are changing the ownership bits on your files to be owned by libuuid on your host machine.
See Understanding user file ownership in docker: how to avoid changing permissions of linked volumes for a basic explanation on how file ownership bits work between your host and your docker containers.
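The underlying mechanics, as a hedged illustration (the uid 100 is only an example and varies by image): Linux stores ownership as numeric ids, and the host and the container each map those numbers to names through their own /etc/passwd:

docker exec my-container id -u nginx    # suppose this prints 100
getent passwd 100                       # on the host, uid 100 may resolve to libuuid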

How do I export an LXC container?

Is it possible to export an LXC container to another machine? There are some tools like lxc-snap and lxc-clone; will those help in exporting to another system? Can I copy-paste the rootfs folder, or is there a standard way of doing this?
Edit: I ended up chrooting into the /var/lib/lxc/centos folder and getting things done, as that was sufficient for my use case. However, make sure that you chroot according to the host system architecture, i.e. use linux32 chroot if the host is a 64-bit machine but your installation is 32-bit.
Did you try:
Turning a container into an image
The easiest way by far to build an image with LXD is to just turn a container into an image.
This can be done with:
lxc launch ubuntu:14.04 my-container
lxc exec my-container bash
<do whatever change you want>
lxc publish my-container --alias my-new-image
You can even turn a past container snapshot into a new image:
lxc publish my-container/some-snapshot --alias some-image
From https://insights.ubuntu.com/2016/04/01/lxd-2-0-image-management-512/
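To actually move the published image to another machine, LXD's image export/import commands can be used (a sketch; the exported filename depends on whether the image is unified or split):

lxc image export my-new-image my-export       # writes my-export.tar.gz (or a meta/rootfs pair)
# copy the file(s) to the target machine, then:
lxc image import my-export.tar.gz --alias my-new-image
lxc launch my-new-image my-container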
the default container location is:
/var/lib/lxc/cn_name
You can lxc-stop the container, then copy the cn_name contents to the same location on the new machine using typical tools such as scp, rsync, etc.
You can also tar up the entire cn_name directory and untar it on the new machine in the same directory.
Just remember, whatever you do ... preserve owner and other attributes for the files and directories you copy so they are still runnable on the new system.
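For example (a minimal sketch; cn_name stands in for your container's name), tar can carry permissions and numeric ownership across machines:

lxc-stop -n cn_name
tar --numeric-owner -czpf cn_name.tar.gz -C /var/lib/lxc cn_name
# copy cn_name.tar.gz to the new machine, then (as root, so ownership is restored):
tar --numeric-owner -xzpf cn_name.tar.gz -C /var/lib/lxc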
