How do I export an LXC container? - linux

Is it possible to export an LXC container to another machine? There are some tools like lxc-snap and lxc-clone; will those help in exporting to another system? Can I just copy the rootfs folder, or is there a standard way of doing this?
Edit: I ended up chrooting into the /var/lib/lxc/centos folder and getting things done, as that was sufficient for my use case. Just make sure that you chroot with the right architecture personality, i.e. use linux32 chroot if the host is a 64-bit machine but your installation is 32-bit.
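For example, a minimal sketch (assuming the container rootfs lives at the default path):
sudo linux32 chroot /var/lib/lxc/centos/rootfs /bin/bash
# linux32 sets a 32-bit personality, so uname -m reports i686 inside the chroot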

Did you try:
Turning a container into an image
The easiest way by far to build an image with LXD is to just turn a container into an image.
This can be done with:
lxc launch ubuntu:14.04 my-container
lxc exec my-container bash
<do whatever change you want>
lxc publish my-container --alias my-new-image
You can even turn a past container snapshot into a new image:
lxc publish my-container/some-snapshot --alias some-image
From https://insights.ubuntu.com/2016/04/01/lxd-2-0-image-management-512/
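To get the result onto another machine, the published image can then be exported to a tarball and imported on the target; a minimal sketch, assuming the lxc client talks to LXD on both hosts:
lxc image export my-new-image .
# writes <fingerprint>.tar.gz into the current directory; copy it over,
# then on the target machine:
lxc image import <fingerprint>.tar.gz --alias my-new-image
lxc launch my-new-image my-container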

The default container location is:
/var/lib/lxc/cn_name
You can lxc-stop the container, then copy the cn_name contents to the same location on the new machine using typical tools such as scp, rsync, etc.
You can also tar up the entire cn_name directory and untar it in the same directory on the new machine.
Just remember, whatever you do, to preserve owner and other attributes for the files and directories you copy so they are still runnable on the new system.
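For example, a sketch using tar with numeric ownership preserved (paths assume the default /var/lib/lxc location):
sudo lxc-stop -n cn_name
sudo tar --numeric-owner -czpf cn_name.tar.gz -C /var/lib/lxc cn_name
scp cn_name.tar.gz newhost:/tmp/
# on the new machine:
sudo tar --numeric-owner -xzpf /tmp/cn_name.tar.gz -C /var/lib/lxc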

Related

How to resolve the file processing issue during docker volume mount in linux?

I am trying to containerize my application. The application basically processes files and places them in a different folder after renaming them. The source folder is "opt/fileprocessing/input" and the target is "opt/fileprocessing/output".
Scenario 1 - without volume mount
When I start my docker container and place a file in the source folder using the docker cp command, the application processes it and places it successfully in the target folder.
Scenario 2 - with volume mounts from the host
docker run -d -v /opt/input:/opt/fileprocessing/input -v /opt/output:/opt/fileprocessing/output --name new_container processor
When I place the file in the /opt/input folder of the host, the application throws an error that it can't place the file in the destination. If I go inside the container and view the input folder, I see the file there, which confirms that the mount has happened successfully. It fails when renaming and moving the file to the destination (this is an application-level error, so the message itself isn't much help).
I tried the following to make it work.
Made sure the host and container users are the same and have the same uid and gid
The file has 775 permissions set.
The container folder has 777 permissions.
The same file was used as in scenario 1.
Same file name and format as well.
Container OS:
NAME="CentOS Linux"
VERSION="8"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="8"
PLATFORM_ID="platform:el8"
Host OS:
NAME="Red Hat Enterprise Linux Server"
VERSION="7.6 (Maipo)"
ID="rhel"
ID_LIKE="fedora"
VARIANT="Server"
Scenario 3 - mounted the folder in a different way, as below
docker run -d -v /opt/fileprocessing:/opt/fileprocessing --name new_container processor
Here the fileprocessing folder on both the container and the host has two subdirectories named input and output.
This way of mounting works for me without any issues.
Please let me know why scenario 2 failed to work and how to fix it?
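One hypothesis worth testing (an assumption, since the application's actual error isn't shown): rename(2) fails with EXDEV when the source and destination sit on two different mounts, even when both bind mounts come from the same host filesystem. That would explain why scenario 2 (two separate bind mounts) fails while scenario 3 (one mount covering both subdirectories) works. A minimal reproduction sketch:
docker run --rm \
  -v /opt/input:/opt/fileprocessing/input \
  -v /opt/output:/opt/fileprocessing/output \
  python:3-alpine python3 -c "import os; open('/opt/fileprocessing/input/t','w').close(); os.rename('/opt/fileprocessing/input/t', '/opt/fileprocessing/output/t')"
# expected failure: OSError: [Errno 18] Invalid cross-device link
# fix: move via copy + delete (e.g. shutil.move) or mount one parent directory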

Read-only file system - LXC

I'm trying to demonstrate an ROP attack and keep getting a "Read-only file system" error on my LXC container.
I'm trying to execute the command:
echo "0" > /proc/sys/kernel/randomize_va_space
The following is returned:
bash: /proc/sys/kernel/randomize_va_space: Read-only file system
Any help is appreciated.
If this is still relevant: you can't change such settings inside a container; Docker doesn't allow it.
Docker does not support changing sysctls inside a container that also modify the host system.
Therefore you have to change the setting "outside" of your container, on your host system.
Just run the same command on your host system and then create your container. The file randomize_va_space in your container should then automatically contain your given value 0.
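For example, on the host (a sketch; requires root and changes the setting system-wide):
echo 0 | sudo tee /proc/sys/kernel/randomize_va_space
# or, equivalently:
sudo sysctl -w kernel.randomize_va_space=0
# containers started afterwards will see the same value through /proc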

Docker-compose volume set host Path

I've created a simple example where I'm trying to achieve a simple task: I need to mount a specific folder inside my container to a specific folder on the host machine.
The Dockerfile that creates the image I use in my docker-compose is the following:
FROM ubuntu:bionic
COPY ./file.txt /vol/file.txt
Into this folder, on the container side, I copy a file called file.txt, and I need this behaviour on the host machine:
If the host machine folder is empty or does not exist, or the file is missing, file.txt should be copied into it.
If the host machine folder already has a file with that name, the file is preserved and not overwritten.
This is exactly what happens when a named volume is used inside docker-compose. The problem with this is that you can't choose the folder on the host machine; you can only assign the volume a name, and Docker will create it inside one of its own folders.
Using a bind mount instead lets you choose the folder on the host machine, but the behaviour is different: even if the folder on the host machine is empty or does not exist, file.txt disappears from the container, because the host folder shadows the image content at that path.
A practical example of this is a common application distributed with Docker. I start from a dotnet image and copy my application into it, which starts on container load. Inside this application I have a configuration file that I need to preserve across image updates, so I need to copy it to the host machine, and this file has to be editable by the user of the host machine, so it can't sit in a subfolder of the Docker installation with a random name.
Is there a way to achieve this?
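One commonly used approach (a sketch; /opt/appconfig is a hypothetical host path that must already exist) is a named volume backed by a host directory through the local driver's bind option. It keeps named-volume semantics, so an empty host folder is seeded from the image while an existing file.txt is left alone:
docker volume create --driver local \
  --opt type=none --opt o=bind --opt device=/opt/appconfig conf
docker run --rm -v conf:/vol my-image ls /vol
# first run: file.txt from the image is copied into the empty /opt/appconfig
# later runs: an existing /opt/appconfig/file.txt is preserved, not overwritten
docker-compose supports the same settings via driver_opts under the top-level volumes: key.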

Sharing a single file from host machine with Docker Container and having the Container r+w to same file

I've got a situation where I've got a .json file that I want to persist between runs of a given container. In addition, the container needs to append to this file as part of its run.
The syntax which I've been using so far is as follows:
docker run -d -p 3001:3001 -v /usr/bob/database.json:/app/data/database.json:rw --name=myapp appImage
Nothing gets inserted into the file (though I can cat the contents inside and outside the container to confirm it's the same file). I have ensured that the root user (yes, not best practice) who is running docker owns all of the files in that folder and has full rwx.
What DOES work is binding at the folder level, e.g.:
docker run -d -p 3001:3001 -v /usr/bob:/app/data --name=myapp appImage
Can anyone explain the difference?
I feel that sharing access to a folder instead of a single file is a lot less precise, and it also forces structural changes in the app (e.g. source control with multiple files, plus the .json file mentioned, in the same folder).
Thanks in advance for any pointers.
Thanks,
Andrew
Mounting a file as a volume mounts a specific inode inside the container. Many tools that modify a file will change the inode when writing a new copy of the file; this new inode is then stored in the directory as the new pointer for that filename. When the directory is mounted, you see the change on your host; otherwise you only see it inside the container, since the inode on the host and the pointer to it in the host directory are unchanged.
There are more details on this behavior in Docker's tutorial on volumes: https://docs.docker.com/engine/tutorials/dockervolumes
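You can watch the inode swap directly (a sketch; GNU sed -i writes a temporary file and renames it over the original):
touch /tmp/database.json
stat -c %i /tmp/database.json        # note the inode number
sed -i 's/x/y/' /tmp/database.json   # rewrites via temp file + rename
stat -c %i /tmp/database.json        # different inode now; a single-file
                                     # bind mount would still point at the old one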

Shared volume/file permissions/ownership (Docker)

I'm having a slightly annoying issue while using a Docker container (I'm on Ubuntu, so no virtualization like VMware or b2d). I've built my image and have a running container that has one shared (mounted) directory from my host and one shared (mounted) file from my host. Here's the docker run command in full:
docker run -dit \
-p 80:80 \
--name my-container \
-v $(pwd)/components:/var/www/components \
-v $(pwd)/index.php:/var/www/index.php \
my-image
This works great, and both /components (and its contents) and the file are shared appropriately. However, when I want to make changes to the directory (e.g. adding a new file or folder) or edit the mounted file (or any file in the directory), I'm unable to do so due to incorrect permissions. Running ls -lFh shows that the owner and group for the mounted items have been changed to libuuid:libuuid. Modifying either the file or the parent directory requires root permissions, which impedes my workflow (as I'm working from Sublime Text, not a terminal, I'm presented with a popup for admin privileges).
Why does this occur? How can I work around this / handle this properly? From Managing Data Volumes: Mount a Host File as a Data Volume:
Note: Many tools used to edit files including vi and sed --in-place may result in an inode change. Since Docker v1.1.0, this will produce an error such as “sed: cannot rename ./sedKdJ9Dy: Device or resource busy”. In the case where you want to edit the mounted file, it is often easiest to instead mount the parent directory.
This would seem to suggest that instead of mounting /components and /index.php, I should instead mount the parent directory of both. Sounds great in theory, but based on the behavior of the -v option and how it interacts with /directory, it would seem that every file in my parent directory would be altered to be owned by libuuid:libuuid. Additionally, I have lots of things inside the parent directory that are not needed in the container - things like build tools, various files, some compressed folders, etc. Mounting the whole parent directory would seem to be wasteful.
Running chown user:group on /components and /index.php on my host machine lets me work around this, and the files seem to keep syncing with the container. Is this something I'll need to do every time I run a container with mounted host volumes? I'm guessing there is a more efficient way to do this; I'm just not finding an explanation for my particular use case anywhere.
I am using this container for development of a module for another program, and have no desire to manage a data-only container - the only files that matter are from my host; persistence isn't needed elsewhere (like a database, etc).
Dockerfile and /setup script: created on pastebin to avoid an even longer post. Never expires.
After creating the image, this is the run command I'm using:
docker run -dit \
-p 80:80 \
--name my-container \
-v $(pwd)/components:/var/www/wp-content/plugins/my-plugin-directory/components \
-v $(pwd)/index.php:/var/www/wp-content/plugins/my-plugin-directory/index.php \
my-image
It looks like the chown -R nginx:nginx ... commands inside your container are changing the ownership bits on your files, which then show up as owned by libuuid on your host machine.
See Understanding user file ownership in docker: how to avoid changing permissions of linked volumes for a basic explanation on how file ownership bits work between your host and your docker containers.
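The underlying mechanism: ownership on a bind mount is stored as a numeric UID/GID, and each side resolves those numbers through its own /etc/passwd. A sketch (104:107 are hypothetical stand-ins for whatever uid/gid nginx has in the image):
docker run --rm -v /tmp/demo:/data alpine \
  sh -c 'touch /data/f && chown 104:107 /data/f'
ls -ln /tmp/demo/f   # host sees the raw numbers 104:107
ls -l /tmp/demo/f    # host shows whichever local name owns uid 104
                     # (libuuid, in this asker's case)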
