chroot process jail with arbitrary directory set as root on each run - linux

I am trying to run a command that needs to be limited to one directory and is executed in a shell function from a web application.
My goal is to run that program but limit it to one directory. This directory will change each time I want to run the program and multiple instances need to be able to run on different directories at the same time.
I have looked at chroot, and it seems that a complete file system tree needs to be explicitly created each time. I am looking for a more temporary solution that accepts the desired root directory and does not require me to copy files all over the place or do weird mounting of things.

What you most likely want is containers.
A container takes milliseconds to start and creates what is basically a complete chroot jail every time it runs. A command like docker run --rm --volume /var/chroot/jail/whatever:/workdir ubuntu stat /workdir will run stat on the jail directory, with the rest of the environment being the latest Ubuntu release. When the command exits, the --rm flag throws the jail away; running it again creates a whole new one.
You will need to build your web application as a docker image on top of whatever jail you need (Ubuntu, CentOS etc.), which means adjusting your build system to create such an image.
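A minimal sketch of what that image could look like, assuming a hypothetical image name myapp and an entry script run.sh inside your application directory (both are placeholders for whatever your build produces):

FROM ubuntu:22.04
COPY app/ /app/
ENTRYPOINT ["/app/run.sh"]

Build it once with docker build -t myapp . and then every invocation gets its own disposable jail rooted at whichever host directory you pass in:

docker run --rm --volume /data/jobs/job-1234:/workdir myapp /workdir

Several of these can run concurrently against different directories without interfering with each other.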

Related

Cannot install inside docker container

I'm quite new to Docker, and I'm facing a problem I have no idea how to solve.
I have a Jenkins (Docker) image running and everything was fine. A few days ago I created a job so I can run my Node.js tests every time a pull request is made. One of the job's build steps is to run npm install, and the job is constantly failing with this error:
tar (child): bzip2: Cannot exec: No such file or directory
So, I know that I have to install bzip2 inside the jenkins container, but how do I do that? I've already tried to run docker run jenkins bash -c "sudo apt-get bzip2" but I got: bash: sudo: command not found.
With that said, how can I do that?
Thanks in advance.
The answer to this lies in the philosophy of Docker containers. Docker containers are/should be immutable. So, this is what you can try to fix this issue.
1. Treat your base image, i.e. jenkins, as the starting point.
2. Log in to this base image and install bzip2.
3. Commit these changes; this should result in a new image.
4. Now use the image from step 3 to install any other packages you need, like npm.
5. Commit that image as well.
Note: To execute commands in a more controlled way, I always prefer to use something like this:
docker exec -it jenkins bash
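Putting the steps above together, a rough sketch of that commit workflow, assuming your running container is named jenkins and using jenkins-with-bzip2 as an arbitrary name for the new image:

docker exec -u root -it jenkins bash      # root shell inside the running container
apt-get update && apt-get install -y bzip2
exit
docker commit jenkins jenkins-with-bzip2  # snapshot the modified container as a new image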
In a nutshell, the answer to both of your current issues lies in the fact that images are immutable, so the way to make a change that persists is to commit it and use the newly created image for further changes. I hope this helps.
Lots of issues here, but the biggest one is that you need to build your images with the tools you need rather than installing inside of a running container. As techtrainer mentions, images are immutable and don't change (at least from your running container), and containers are disposable (so any changes you make inside them are lost when you restart them unless your data is stored outside the container in a volume).
I do disagree with techtrainer on making your changes in a container and committing them to an image with docker commit. This will work, but it's a hand-built method that is error prone and not easily reproduced. Instead, you should leverage a Dockerfile and use docker build. You can either modify the jenkins image you're using by directly editing its Dockerfile, or you can create a child image that is FROM jenkins:latest.
When modifying this image, note that the Jenkins image is configured to run as the user "jenkins", so you'll need to switch to root to perform your application installs. The "sudo" app is not included in most images, but external to the container you can run docker commands as any user. From the CLI, that's as easy as docker run -u root .... And inside your Dockerfile, you just need a USER root at the top and then USER jenkins at the end.
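A minimal sketch of such a child image, assuming the jenkins:latest base mentioned above (the package list is just the bzip2 example from the question):

FROM jenkins:latest
USER root
RUN apt-get update && apt-get install -y bzip2 && rm -rf /var/lib/apt/lists/*
USER jenkins

Build it with docker build -t my-jenkins . and run that image in place of the stock one; the install is now reproducible and survives container restarts.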
One last piece of advice is to not run your builds directly on the jenkins container, but rather run agents with your needed build tools that you can upgrade independently from the jenkins container. It's much more flexible, allows you to have multiple environments with only the tools needed for that environment, and if you scale this up, you can use a plugin to spin up agents on demand so you could have hundreds of possible agents to use and only be running a handful of them concurrently.

What is the file-system of a Docker container? On which file system does an application running inside this container run?

Basically, I am running Docker on my Windows 10 machine. I have mounted a windows directory inside this container to access the files on my windows machine, on which a couple of tasks are to be performed.
Which file system runs inside a docker container?
Is it the same as that of the OS on which this container is based? For instance, if I run a container with Ubuntu as the base OS, will it be the file system of the Ubuntu version running inside the container?
Or is it the file system of the host that the Docker daemon is running on?
Also, I am running an application inside this container, which accesses files in my Windows directory as well as creates a couple of files. Now, these files are being written to Windows, so they follow the Windows file system (NTFS).
So, how does it work? (A different file system inside the Docker container and the Windows file system, both working in conjunction?)
Which file system runs inside a docker container?
For bind-mounted host directories, the one from the Docker host (Windows NTFS or an Ubuntu filesystem); the container's own image layers sit on a union filesystem managed by the Docker daemon.
$ docker run -d -P --name web -v /src/webapp:/opt/webapp training/webapp python app.py
This command mounts the host directory, /src/webapp, into the container at /opt/webapp.
If the path /opt/webapp already exists inside the container’s image, the /src/webapp mount overlays but does not remove the pre-existing content.
Once the mount is removed, the content is accessible again.
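A quick way to see this, assuming the training/webapp image from the command above (any image with content at the target path would do):

docker run --rm training/webapp ls /opt/webapp                                # the app files baked into the image
docker run --rm -v /src/webapp:/opt/webapp training/webapp ls /opt/webapp     # only what is in /src/webapp on the host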
Now, these files are being written to Windows, so they follow the Windows file system (NTFS).
Yes, though be aware of case-sensitivity differences between Windows and Linux paths (as illustrated in 18756).

How to create a directory in /run for each Supervisor program?

I have an Ubuntu 14.04 LTS server running a few different programs under Supervisor. Many of the programs need to store sockets and named pipes on the filesystem, and /run seems like the ideal choice for these types of files. Unfortunately, /run is a tmpfs that is wiped on every reboot, and root privileges are needed to (re)create the directories that each program writes to.
I need a way to create a few subdirectories in /run and set the owner/mode to something that each program can work with, and do so on each reboot before Supervisor tries to start them. It does not look like Supervisor supports a mechanism to run pre-start commands before it starts a program.
Most other answers for this type of question suggest doing it in the init script, but that belongs to Supervisor's package and I do not want to mess with it (or have to maintain it when it changes upstream).
If this machine had Systemd it seems like I could use /etc/tmpfiles.d, but it does not.
The best idea I came up with was to use a separate Upstart pre-start script for each program that only creates the directories without actually launching any processes. Something like:
/etc/init/myapp1.conf
start on runlevel [2345]
pre-start script
mkdir -p -m 0755 /var/run/myapp1
chown app1user: /var/run/myapp1
end script
...without any exec line. I'm not 100% sure this is valid or sane, but it appears to work. Are there cleaner ways to do something like this?
Do you run your apps under Supervisor as a specific user? By default, applications are run as root.
What I would do is a simple wrapper script which does the following:
Checks whether the required files/folders exist, and creates them if not.
Sets the owner if necessary.
Then starts your application.
Put this script into your Supervisor config instead of starting your application directly, and make sure it is run as root (remove user from the config or set user=root); a sketch follows below.
This way you can always be sure that your environment is set up and your directories exist. So if you clear the tmpfs for some reason, your scripts will still run without a reboot.
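A minimal sketch of such a wrapper (say, /usr/local/bin/start-myapp1.sh), with a hypothetical application /usr/local/bin/myapp1 and user app1user as placeholders:

#!/bin/sh
# runs as root via Supervisor, so it may create the directory on the tmpfs
mkdir -p -m 0755 /var/run/myapp1
chown app1user: /var/run/myapp1
# replace this shell with the real application so Supervisor tracks the right PID
exec /usr/local/bin/myapp1

And the matching Supervisor program entry:

[program:myapp1]
command=/usr/local/bin/start-myapp1.sh
user=root

The exec matters: it lets Supervisor supervise the application itself rather than the wrapper shell.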
If you NEED to run your applications under a specific user, you can do the following:
Move the first two points into a separate setup script (as you do now with your solution).
Create another script which calls your setup script with sudo and then starts your application (see the sketch below).
Add your custom user and the setup script to the sudoers file so that your user can call that script as root without a password prompt. (Be aware: this is a security risk if someone gets access to your server. Make sure that your setup script is NOT writable.)
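A rough sketch of that split, again with the hypothetical myapp1/app1user names; adjust to your own layout:

/usr/local/bin/setup-myapp1.sh (owned by root and not writable by app1user):
#!/bin/sh
mkdir -p -m 0755 /var/run/myapp1
chown app1user: /var/run/myapp1

/usr/local/bin/start-myapp1.sh (what Supervisor runs, with user=app1user):
#!/bin/sh
sudo /usr/local/bin/setup-myapp1.sh
exec /usr/local/bin/myapp1

/etc/sudoers.d/myapp1 (edit it with visudo -f so a syntax error cannot lock you out):
app1user ALL=(root) NOPASSWD: /usr/local/bin/setup-myapp1.sh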

How to deploy a Docker image to make changes in the local environment?

EDIT +2: Just FYI, I am a root user, which means I do not have to type out superuser do (sudo) every time I run a command that requires authorization.
Alright, so after about 24 hours of researching Docker I am a little upset, if I have my facts straight.
As a quick recap, Docker serves as a way to write code or configuration-file changes for a specific web service, runtime environment, or virtual machine, all from the cozy confines of a Linux terminal/text file. This is beyond a doubt an amazing feature: having code or builds you made on one computer work on an unlimited number of other machines is truly a breakthrough. I am annoyed, though, that the terminology is confusing with respect to what containers are and what images are (images are save points of layers of code that come from Docker's servers or can be created from containers, which require a base image to start from; Dockerfiles serve as a way to automate the build process of making images by running all the desired layers and rolling them into one image so it can be accessed easily).
See, the catch with Docker is that, sure, it can be deployed on a variety of different operating systems and use their respective commands. But those commands do not really come to pass on, say, the local environment. While running some tests on a docker build working with CentOS, the basic command structure goes:
FROM centos
RUN yum search epel
RUN yum install -y epel-release.noarch
RUN echo epel installed!
So this works within the docker build and says it successfully installs it.
The same can be said for Ubuntu by running apt-cache instead of yum. But going back to the CentOS VM, it DOES NOT show that epel has been installed, because when attempting to run the command
yum remove epel-release.noarch
it says "no packages were to be removed yet there is a package named ...". So then, if docker is able to be multi-platform why can it not actually create those changes on the local platform/image we are targeting? The docker builds run a simulation of what is going to happen on that particular environment but i can not seem to make it come to pass. This just defeats one of my intended purposes of the docker if it can not change anything local to the system one is using, unless i am missing something.
Please let me know if anyone has a solution to this dilemma.
EDIT +1: OK, so I figured out yesterday that what I was trying to do was view and modify the container, which can be done with either docker logs containerID or docker run -t -i img /bin/sh, which puts me into an interactive shell to make container changes there. Still, I want to know if there's a way to make Docker communicate with the local environment from within a container.
So, I think you may have largely missed the point behind Docker, which is the management of containers that are intentionally isolated from your local environment. The idea is that you create containerized applications that can be run on any Docker host without needing to worry about the particular OS installed or configuration of the host machine.
That said, there are a variety of ways to break this isolation if that's really what you want to do.
You can start a container with --net=host (and probably --privileged) if you want to be able to modify the host network configuration (including interface addresses, routing tables, iptables rules, etc).
You can mount parts of (or all of) the host filesystem as volumes inside the container using the -v command line option. For example, docker run -v /:/host ... would expose the root of your host filesystem as /host inside the container.
Normally, Docker containers have their own PID namespace, which means that processes on the host are not visible inside the container. You can run a container in the host PID namespace by using --pid=host.
You can combine these various options to provide as much or as little access to the host as you need to accomplish your particular task.
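For example, a sketch of a deliberately un-isolated container that combines all three (use with care; every flag here is one of the options above):

docker run --rm -it --net=host --pid=host --privileged -v /:/host ubuntu bash
# inside this shell you see the host's network, the host's process table, and the host's filesystem under /host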
If all you're trying to do is install packages on the host, a container is probably the wrong tool for the job.

Can oprofile be made to use a directory other than /root/.oprofile?

We're trying to use oprofile to track down performance problems on a server cluster. However, the servers in question have a read-only file system, where /var/tmp is the only writeable directory.
OProfile wants to create two directories whenever it runs, /root/.oprofile and /var/lib/oprofile, but it can't, because the filesystem is read-only. I can use the --session-dir command line option to make it write its logs somewhere other than /var/lib, but I can't find any such option to make it use some directory other than /root/.oprofile.
The filesystem is read-only because it is on non-writeable media, not because of permissions -- i.e., not even the superuser can write to those directories. We can cook a new ROM image of the filesystem (which is how we installed oprofile, obviously), but there is no way for a runtime program to write to /root, whether it is superuser or not.
I tried creating a symlink in the ROM that points /root/.oprofile -> /var/tmp/oprofile, but apparently oprofile doesn't see this symlink as a directory, and fails when run:
redacted#redacted:~$ sudo opcontrol --no-vmlinux --start --session-dir=/var/tmp/oprofile/foo
mkdir: cannot create directory `/root/.oprofile': File exists
Couldn't mkdir -p /root/.oprofile
We must run our profilers on this particular system, because the performance issues we're trying to investigate don't manifest if we build and run the app on a development server. We can't just run our tests on a programmer's workstation and profile the app there, because the problem doesn't happen there.
Is there some way to configure oprofile so that it doesn't use /root ?
I guess it should be as simple as overriding the HOME environment variable:
HOME=/tmp/fakehome sudo -E opcontrol --no-vmlinux --start --session-dir=/var/tmp/oprofile/foo
If that doesn't work out, you could have a look at
unionfs
aufs
to create a writable overlay. You might even just mount a tmpfs on /root, or something simple like that.
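For instance, a minimal sketch of the tmpfs option (the size is arbitrary, and the mount only lasts until the next reboot, so it would need to go into your boot scripts):

sudo mount -t tmpfs -o size=16m,mode=0700 tmpfs /root
sudo opcontrol --no-vmlinux --start --session-dir=/var/tmp/oprofile/foo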
It turns out that this directory is hardcoded into the opcontrol bash script:
# location for daemon setup information
SETUP_DIR="/root/.oprofile"
SETUP_FILE="$SETUP_DIR/daemonrc"
Editing those lines seemed to get it working, more or less.
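If you would rather patch this when cooking the ROM image, a one-liner like the following should work (the path to opcontrol and the replacement directory are just examples for your setup):

sed -i 's|SETUP_DIR="/root/.oprofile"|SETUP_DIR="/var/tmp/oprofile-home"|' /usr/bin/opcontrol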
