Docker-compose volume: set host path - Linux

I've created a simple example where I'm trying to achieve a simple task: I need to mount a specific folder inside my container to a specific folder on the host machine.
The Dockerfile that creates the image I use in my docker-compose is the following:
FROM ubuntu:bionic
COPY ./file.txt /vol/file.txt
This copies a file called file.txt into /vol on the container side, and I need this behaviour on the host machine:
If the host machine folder is empty or does not exist, or the file is missing, file.txt should be copied into it.
If the host machine folder already contains a file with that name, the file is preserved and not overwritten.
This is exactly what happens when a named volume is used inside docker-compose. The problem is that you can't choose the folder on the host machine: you can only assign a name to the volume, and Docker creates it inside one of its own folders.
A bind mount, on the other hand, lets you choose the folder on the host machine, but the behaviour is different: even if the folder on the host machine is empty or does not exist, the file file.txt disappears from the container.
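The two behaviours can be seen side by side in a compose file (a sketch; the image and volume names are illustrative):

```yaml
services:
  app:
    image: myimage          # hypothetical image built from the Dockerfile above
    volumes:
      - mydata:/vol         # named volume: Docker seeds /vol from the image on
                            # first use and never overwrites existing files
      # - ./host-dir:/vol   # bind mount: the host directory's contents win,
                            # even when the directory is empty
volumes:
  mydata:
```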
A practical example of this is a common application distributed with Docker. I start from a dotnet image and copy my application into it; the application starts on container load. Inside this application I have a configuration file that I need to preserve across image updates, so I need to copy it to the host machine, and this file has to be editable by the user of the host machine, so it can't sit in a subfolder of the Docker installation with a random name.
Is there a way to achieve this?
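One common workaround (a sketch, not a built-in Docker feature) is to bind-mount the host folder and seed it from an entrypoint script: bake the defaults into the image at a second path such as /defaults, and copy the file over only when it is missing. The seed-if-missing logic can be simulated outside Docker; all paths here are illustrative:

```shell
defaults=$(mktemp -d); vol=$(mktemp -d)   # stand-ins for /defaults and the bind mount
echo "default contents" > "$defaults/file.txt"
# copy only when the target is absent, mirroring a named volume's first-use seeding
[ -f "$vol/file.txt" ] || cp "$defaults/file.txt" "$vol/file.txt"
cat "$vol/file.txt"
```

In a real image the same test-and-copy would run in the ENTRYPOINT before exec-ing the application, so a user-edited file on the host is never overwritten.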

Related

How to resolve the file processing issue during docker volume mount in linux?

I am trying to containerize my application. The application basically processes files and places them in a different folder after renaming them. The source folder is "opt/fileprocessing/input" and the target is "opt/fileprocessing/output".
Scenario 1 - without volume mount
When I start my Docker container and place a file in the source folder using the docker cp command, the application processes it and places it successfully in the target folder.
Scenario 2 - with volume mount to the host
docker run -d -v /opt/input:/opt/fileprocessing/input -v /opt/output:/opt/fileprocessing/output --name new_container processor
When I place the file in the /opt/input folder of the host, the application throws an error that it can't place the file in the destination. If I go inside the container and view the input folder, I see the file there, which confirms that the mount has happened successfully. It fails when renaming the file and posting it to the destination (this is an application-level code error, so not much help there).
I tried the following to make it work:
Made sure the host and container users are the same and have the same uid and gid.
The file has 775 permissions set.
The container folder has 777 permissions.
The same file was placed that was used for scenario 1.
Same file name and format as well.
Container OS:
NAME="CentOS Linux"
VERSION="8"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="8"
PLATFORM_ID="platform:el8"
Host OS:
NAME="Red Hat Enterprise Linux Server"
VERSION="7.6 (Maipo)"
ID="rhel"
ID_LIKE="fedora"
VARIANT="Server"
Scenario 3 - mounted the folders in a different way, as below:
docker run -d -v /opt/fileprocessing:/opt/fileprocessing --name new_container processor
Here the fileprocessing folder in the container and on the host has two subdirectories named input and output.
This way of mounting works for me without any issues.
Please let me know why scenario 2 failed to work and how to fix it.
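One plausible explanation (an assumption on my part, but consistent with scenario 3 working) is that with two separate bind mounts, input and output are distinct mount points inside the container, so a bare rename(2) across them fails with EXDEV; mounting the single common parent keeps the move on one mount. Tools like mv fall back to copy-and-delete in that case, which can be sketched outside Docker with illustrative paths:

```shell
in_dir=$(mktemp -d); out_dir=$(mktemp -d)   # stand-ins for the two mounts
echo "payload" > "$in_dir/report.txt"
# mv retries with copy+unlink when rename(2) returns EXDEV; a raw rename() call
# in application code would surface the error instead
mv "$in_dir/report.txt" "$out_dir/report.txt"
cat "$out_dir/report.txt"
```

If the application uses a bare rename, either mounting the parent directory (scenario 3) or switching to a copy-then-delete move in the application code should avoid the failure.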

Access shared folder using a specific windows credential

I'm currently working with a requirement: download a file from the database, then write it to a shared folder. Temporarily, I'm writing to a path on my local machine:
File.WriteAllBytes(path, content);
My problem is that the shared folder is on a Windows machine and only a specific account is allowed to write to it.
I know the basics of impersonation, but I don't know if it is possible to impersonate from a Docker container on a Linux machine.
In short, I want to deploy my application in a Linux container and then write a file to a Windows shared folder with limited access.
Is the folder on the host, or mounted on the host? If so, you can map the host folder to the container, e.g.:
C:\> "Hello" > c:\temp\testfile.txt
C:\> docker run -v c:/temp:/tmp busybox cat /tmp/testfile.txt
c:/temp being a local path on the host
/tmp being the path in the container.
More details here: volume-shared-filesystems
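For the credentialed-share part of the question, Docker's local volume driver can mount a CIFS share with a specific account (via mount.cifs options on the host), so impersonation inside the container is unnecessary. A sketch; the server, share, and account names are placeholders, and cifs-utils must be installed on the host:

```yaml
services:
  app:
    image: myapp                    # hypothetical application image
    volumes:
      - winshare:/app/output        # the app writes here as if it were local
volumes:
  winshare:
    driver: local
    driver_opts:
      type: cifs
      device: "//fileserver/shared"
      o: "username=svc_writer,password=secret,vers=3.0"
```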

Ansible doesn't work when user home directory mounted on NFS volume

Here is the situation: I have a number of hosts that I'd like to maintain via Ansible. The baseline configuration of the hosts (logins/users/etc.) is controlled by corporate IT overlords, so I can only change things related to the application, not the general host setup. Some of the application-related tasks require running as 'root' or some other privileged user.
I do have password-less sudo access on all the hosts; however, all user home directories are located on an NFS-mounted volume. From my understanding of how Ansible works, it first logs into the target host as a regular user and places some files into the $HOME/.ansible directory, then it switches to the root user using sudo and tries to run the stuff from that directory.
But here is the problem. As I mentioned above, the home directories are on an NFS volume, so after the Ansible process on the target machine becomes root, it can no longer access the $HOME/.ansible directory due to NFS restrictions. Is there a way to tell Ansible to put these work files outside of the home directory, on some non-NFS volume?
There are two parameters for the ansible.cfg configuration file, introduced in Ansible 2.1, which allow specifying the location of the temporary directory on the target and control machines:
remote_tmp
Ansible works by transferring modules to your remote machines, running them, and then cleaning up after itself. In some cases, you may not wish to use the default location and would like to change the path. You can do so by altering this setting:
remote_tmp = ~/.ansible/tmp
local_tmp
When Ansible gets ready to send a module to a remote machine it usually has to add a few things to the module: Some boilerplate code, the module’s parameters, and a few constants from the config file. This combination of things gets stored in a temporary file until ansible exits and cleans up after itself. The default location is a subdirectory of the user’s home directory. If you’d like to change that, you can do so by altering this setting:
local_tmp = $HOME/.ansible/tmp
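For the NFS problem above, pointing both settings at a local filesystem such as /tmp keeps the work files accessible after the sudo switch; a minimal ansible.cfg sketch (the paths are illustrative, not mandated):

```ini
[defaults]
# put Ansible's work files on a non-NFS volume so root can still read them
remote_tmp = /tmp/.ansible/tmp
local_tmp  = /tmp/.ansible/tmp
```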

Sharing a single file from host machine with Docker Container and having the Container r+w to same file

I've got a .json file that I want to persist between runs of a given container. In addition, this file needs to be appended to by the container as part of its running.
The syntax which I've been using so far is as follows:
docker run -d -p 3001:3001 -v /usr/bob/database.json:/app/data/database.json:rw --name=myapp appImage
Nothing gets inserted into the file (though I can cat the contents inside and outside the container to confirm they're the same). I have ensured that the root user (yes, not best practice) who is running Docker owns all of the files in that folder and has full rwx.
What DOES work is binding at the folder level, e.g.:
docker run -d -p 3001:3001 -v /usr/bob:/app/data --name=myapp appImage
Can anyone explain the difference?
I feel that sharing access to a folder instead of a single file is a lot less precise, and it also causes structural changes in the app (e.g. source control with multiple files, plus the .json file mentioned, in the same folder).
Thanks in advance for any pointers.
Thanks,
Andrew
Mounting a file as a volume mounts a specific inode inside the container. Many tools that modify a file change the inode when writing a new copy of the file; this new inode is stored in the directory as the new pointer to that filename. When the directory is mounted, you see the change on your host; otherwise you only see it inside the container, since the inode on the host, and the pointer to it in the host directory, are unchanged.
There are more details on this behavior in Docker's tutorial on volumes: https://docs.docker.com/engine/tutorials/dockervolumes
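The inode replacement is easy to observe outside Docker (this sketch assumes GNU stat, as on Linux):

```shell
f=$(mktemp)
before=$(stat -c %i "$f")            # the inode a single-file bind mount would pin
tmp=$(mktemp); echo "new contents" > "$tmp"
mv "$tmp" "$f"                       # the "write temp file, rename over" save pattern
after=$(stat -c %i "$f")
[ "$before" != "$after" ] && echo "inode changed"
```

Appending in place (e.g. `echo line >> file`) keeps the inode and works through a single-file mount; it is the atomic-replace pattern that breaks it.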

How do I export a lxc container?

Is it possible to export an LXC container to another machine? There are some tools like lxc-snap and lxc-clone; will those help in exporting to another system? Can I copy-paste the rootfs folder, or is there a standard way of doing this?
Edit: I ended up chrooting into the /var/lib/lxc/centos folder and getting things done, as that was sufficient for my use case. However, make sure that you chroot according to the host system architecture, i.e. use linux32 chroot if the host is a 64-bit machine but your installation is 32-bit.
Did you try:
Turning a container into an image
The easiest way by far to build an image with LXD is to just turn a container into an image.
This can be done with:
lxc launch ubuntu:14.04 my-container
lxc exec my-container bash
<do whatever change you want>
lxc publish my-container --alias my-new-image
You can even turn a past container snapshot into a new image:
lxc publish my-container/some-snapshot --alias some-image
From https://insights.ubuntu.com/2016/04/01/lxd-2-0-image-management-512/
The default container location is:
/var/lib/lxc/cn_name
You can lxc-stop the container, then copy the cn_name contents to the same location on the new machine using typical tools such as scp, rsync, etc.
You can also tar up the entire cn_name directory, then untar it on the new machine in the same directory.
Just remember, whatever you do: preserve owner and other attributes for the files and directories you copy so they are still runnable on the new system.
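The tar route preserves permissions with -p (and, when run as root, ownership too; --numeric-owner avoids uid/gid remapping between machines). A self-contained sketch with illustrative paths standing in for the two machines:

```shell
src=$(mktemp -d); dst=$(mktemp -d)           # stand-ins for the old and new hosts
mkdir -p "$src/cn_name/rootfs"
echo "host1" > "$src/cn_name/rootfs/hostname"
chmod 640 "$src/cn_name/rootfs/hostname"
tar -C "$src" -cpzf "$src/cn_name.tar.gz" cn_name    # archive with attributes
tar -C "$dst" -xpzf "$src/cn_name.tar.gz"            # -p restores permissions
stat -c %a "$dst/cn_name/rootfs/hostname"            # mode survives the round trip
```

On the real machines you would run both tar commands as root against /var/lib/lxc so that file ownership inside rootfs is preserved as well.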