Read-only file system - LXC - linux

I'm trying to demonstrate an ROP attack and keep getting a "Read-only file system" error on my LXC container.
I'm trying to execute the command:
echo "0" > /proc/sys/kernel/randomize_va_space
The following is returned:
bash: /proc/sys/kernel/randomize_va_space: Read-only file system
Any help is appreciated.

If this is still relevant: you can't change such settings from inside a container; Docker doesn't allow it.
Docker does not support changing sysctls inside a container that also modify the host system.
Therefore you have to change the setting "outside" of your container, on your host system.
Just run the same command on your host system and then create your container. The file randomize_va_space in your container should then automatically contain your chosen value 0.
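For example, on the host (not inside the container), the same setting can be applied either by redirecting into /proc or with sysctl:
echo 0 | sudo tee /proc/sys/kernel/randomize_va_space
sudo sysctl -w kernel.randomize_va_space=0
Then create/start the container; /proc/sys/kernel/randomize_va_space inside it should read 0.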

Related

How to resolve the file processing issue during docker volume mount in linux?

I am trying to containerize my application. The application basically processes files and places them in a different folder after renaming them. The source folder is "/opt/fileprocessing/input" and the target is "/opt/fileprocessing/output".
Scenario 1. - without volume mount
When I start my docker container and place a file in the source folder using the docker cp command, the application processes it and places it successfully in the target folder.
Scenario 2. - with volume mounts from the host
docker run -d -v /opt/input:/opt/fileprocessing/input -v /opt/output:/opt/fileprocessing/output --name new_container processor
When I place the file in the /opt/input folder of the host, the application throws an error that it can't place the file in the destination. If I go inside the container and view the input folder, I see the file there, which confirms that the mount happened successfully. It fails when renaming the file and posting it to the destination (this is an application-level code error, so there is not much help to be had from it).
I tried the following to make it work.
Made sure the host and container users are the same and have the same uid and gid
The file has 775 permissions set.
The container folder has 777 permissions
Same file has been placed that was used for scenario 1.
File name same and format as well
Container OS:
NAME="CentOS Linux"
VERSION="8"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="8"
PLATFORM_ID="platform:el8"
Host OS:
NAME="Red Hat Enterprise Linux Server"
VERSION="7.6 (Maipo)"
ID="rhel"
ID_LIKE="fedora"
VARIANT="Server"
Scenario 3. - mounted the folder in a different way, as below
docker run -d -v /opt/fileprocessing:/opt/fileprocessing -v /opt/fileprocessing:/opt/fileprocessing --name new_container processor
where the fileprocessing folder in both the container and the host has two subdirectories named input and output.
This way of mounting seems to work for me without any issues.
Please let me know why scenario 2 failed to work and how to fix it.
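A quick way to double-check the uid/gid point from the list above (the container name new_container is taken from scenario 2, and the paths are the ones from the question):
ls -ln /opt/input /opt/output                                  # numeric owner and group on the host
docker exec new_container id                                   # uid/gid of the container's default user
docker exec new_container ls -ln /opt/fileprocessing/output    # ownership of the mounted target as seen from inside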

setcap cap_net_admin in linux containers prevents user access to every file

I have a tcpdump application in a CentOS container. I was trying to run tcpdump as nonroot. Following this forum post: https://askubuntu.com/questions/530920/tcpdump-permissions-problem (and some other documentation that reinforced this), I tried to use setcap cap_net_admin+eip /path/to/tcpdump in the container.
After running this, I tried to run tcpdump as a different user (with permission to run tcpdump) and got "Operation Not Permitted". I then tried to run it as root, which had previously been working, and also got "Operation Not Permitted". After running getcap, I verified that the capabilities were set as they should be. I thought it might be my specific use case, so I tried running the setcap command against several other executables. Every single executable returned "Operation Not Permitted" until I ran setcap -r /filepath.
Any ideas on how I can address this issue, or even work around it without using root to run tcpdump?
The NET_ADMIN capability is not included in containers by default because it could allow a container process to modify and escape any network isolation settings applied to the container. Therefore, explicitly setting this capability on a binary with setcap is going to fail, since root and every other user in the container is blocked from that capability. To run a container with it, you would need to add the capability when starting your container, e.g.
docker run --cap-add NET_ADMIN ...
However, I believe all you need is NET_RAW (setcap cap_net_raw), which is included in the default capabilities. From man capabilities:
CAP_NET_RAW
* Use RAW and PACKET sockets;
* bind to any address for transparent proxying.
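A minimal sketch of that approach inside the container, assuming tcpdump lives at /usr/sbin/tcpdump and the non-root user is called capture (the path, user name, and interface name are all illustrative):
setcap cap_net_raw+eip /usr/sbin/tcpdump
getcap /usr/sbin/tcpdump                               # should report cap_net_raw+eip
su capture -s /bin/bash -c 'tcpdump -i eth0 -c 1'      # capture a single packet as the non-root user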

docker uid/gid mapping changes on different host

I have a docker container built on one system where uid = 1000 and gid = 300.
When I then try to use that same container on the Jenkins server, those uid/gid values belong to a different user and group, so when I try to execute commands in the container they fail due to permissions.
How can I run the container with the users mapped to the correct uid/gid?
There are a few options to handle uid/gid mismatches with host volumes.
open up permissions on the host directory; this is often considered bad by the security team, for good reason, but it is also really easy to do.
build a new image for each host where it may run with different uid/gid
switch to named volumes instead. If you don't need direct access to the volume contents from the host, this is recommended.
fix permission problems with an entrypoint (a sketch of this approach follows below).
I go into more detail on some of these in my DockerCon talk.
There's also another example of the entrypoint solution where I configured a Jenkins container to fix the docker gid to match the mounted docker socket: https://github.com/sudo-bmitch/jenkins-docker
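For the entrypoint option, a minimal sketch that runs as root, aligns the container user with the volume's owner, and then drops privileges (the names appuser, /data, and the gosu helper are assumptions, not something from the question):
#!/bin/sh
set -e
# Align the container user's uid/gid with whatever owns the mounted volume
VOLUME_UID=$(stat -c %u /data)
VOLUME_GID=$(stat -c %g /data)
usermod -u "$VOLUME_UID" appuser 2>/dev/null || true
groupmod -g "$VOLUME_GID" appuser 2>/dev/null || true
# Drop privileges and run the container's real command
exec gosu appuser "$@"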

Sharing a single file from host machine with Docker Container and having the Container r+w to same file

I've got a situation where I have a .json file that I want to persist between runs of a given container. In addition, this file needs to be appended to by the container as it runs.
The syntax which I've been using so far is as follows:
docker run -d -p 3001:3001 -v /usr/bob/database.json:/app/data/database.json:rw --name=myapp appImage
Nothing gets inserted into the file (though I can cat the contents inside and outside the container to confirm they are the same). I have ensured that the root user (yes, not best practice) who is running docker owns all of the files in that folder and has full rwx.
What DOES work is if I bind at the folder level, e.g.:
docker run -d -p 3001:3001 -v /usr/bob:/app/data --name=myapp appImage
Can anyone explain the difference?
I feel that sharing access to a folder instead of a single file is a lot less precise and also causes structural changes in the app (e.g. source control with multiple files, plus the .json file mentioned, in the same folder).
Thanks in advance for any pointers.
Thanks,
Andrew
Mounting a file as a volume mounts a specific inode inside the container. Many tools that modify a file will change the inode when writing a new copy of the file. This new inode is stored in the directory as the new pointer to that filename. When the directory is mounted, you see the change on your host; otherwise you only see it inside the container, since the inode on the host and the pointer to it in the host directory are unchanged.
There are more details on this behavior in Docker's tutorial on volumes: https://docs.docker.com/engine/tutorials/dockervolumes
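The rename-over behaviour is easy to see on any Linux host; note how the inode number changes after a copy-and-rename, which is what many editors and libraries do under the hood:
echo '{}' > database.json
ls -i database.json                      # note the inode number
cp database.json database.json.tmp && mv database.json.tmp database.json
ls -i database.json                      # different inode; a bind mount of the old file would still point at the old one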

How do I export a lxc container?

Is it possible to export an lxc container to another machine? There are some tools like lxc-snap and lxc-clone; will those help in exporting to another system? Can I copy-paste the rootfs folder, or is there a standard way of doing this?
Edit: I ended up chrooting into the /var/lib/lxc/centos folder and getting things done, as that was sufficient for my use case. However, make sure that you chroot according to the container's architecture, i.e. use linux32 chroot if the host is a 64-bit machine but your installation is 32-bit.
Did you try:
Turning a container into an image
The easiest way by far to build an image with LXD is to just turn a container into an image.
This can be done with:
lxc launch ubuntu:14.04 my-container
lxc exec my-container bash
<do whatever change you want>
lxc publish my-container --alias my-new-image
You can even turn a past container snapshot into a new image:
lxc publish my-container/some-snapshot --alias some-image
From https://insights.ubuntu.com/2016/04/01/lxd-2-0-image-management-512/
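To actually move that image to another machine, it can then be exported and re-imported there (the image alias below is the one from the quoted example; the tarball path is a placeholder):
lxc image export my-new-image .                      # writes the image tarball into the current directory
lxc image import <tarball> --alias my-new-image      # run on the target machine after copying the file over
lxc launch my-new-image my-container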
the default container location is:
/var/lib/lxc/cn_name
You can lxc-stop the container and then copy the cn_name contents to the same location on the new machine using typical tools such as scp, rsync, etc.
You can also tar up the entire cn_name directory and then untar it on the new machine in the same directory.
Just remember, whatever you do ... preserve owner and other attributes for the files & directories you copy so they are still runnable on the new system.
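A hedged sketch of that copy using tar, so that numeric owners and permissions are preserved (cn_name and newhost are placeholders):
lxc-stop -n cn_name
tar --numeric-owner -czpf cn_name.tar.gz -C /var/lib/lxc cn_name
scp cn_name.tar.gz user@newhost:/tmp/
tar --numeric-owner -xzpf /tmp/cn_name.tar.gz -C /var/lib/lxc    # run on the new machine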
