docker-compose exec vs Dockerfile changing permission - Linux

I am able to change the permission of a file when I run docker-compose exec app <change file permission command>.
However, if I try to do it from the Dockerfile, it errors out saying it is a read-only file system.
I know the files inside /etc are mounted at runtime; I wanted to know whether it is possible to change them from the Dockerfile.

You can add a script to the image in your Dockerfile (COPY it in) and run it when the container starts. You are facing this error because images are, indeed, read-only file systems, while containers are mutable. So running the change from a script at container start should fix the problem.
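A minimal sketch of that idea, assuming an entrypoint script named fix-perms.sh and a placeholder target file and command (these names are assumptions, not from the original question):

# Dockerfile
COPY fix-perms.sh /usr/local/bin/fix-perms.sh
RUN chmod +x /usr/local/bin/fix-perms.sh
ENTRYPOINT ["/usr/local/bin/fix-perms.sh"]
CMD ["your-app"]

# fix-perms.sh
#!/bin/sh
# Runs at container start, when runtime-mounted paths under /etc are writable
chmod 644 /etc/some-config
# Hand off to the container's main command (CMD)
exec "$@"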

Related

How to connect paths in a Dockerfile

I am running a Jenkins job inside a docker container. This job requires doxygen, but I'm getting an error saying:
[exec] /bin/sh: /opt/fc4-usr-local/bin/doxygen: No such file or directory
I have doxygen installed in my Docker image, but its path is:
/usr/bin/doxygen
Inside my docker image, I want to connect the old path /opt/fc4-usr-local/bin/doxygen with the new path /usr/bin/doxygen,
so that whenever my job looks for doxygen it ends up at the new path /usr/bin/doxygen.
Note 1: The reason I can't just edit my job to look for doxygen in the new path is that its files are locked and I'm not allowed to change them.
Note 2: So my idea is that when my Jenkins job looks for doxygen in my docker container, it goes straight to the new path, not the old one.
Could anyone please suggest any ideas for this?
Add these lines near the bottom of your Dockerfile:
RUN mkdir -p /opt/fc4-usr-local/bin
RUN ln -s /usr/bin/doxygen /opt/fc4-usr-local/bin/doxygen
The first line creates the directory.
The second line creates a symlink at the old path pointing to the real doxygen binary.
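To verify the link inside the built image, you could run something like the following (a sketch; the image name is a placeholder):
docker run --rm my-jenkins-image ls -l /opt/fc4-usr-local/bin/doxygen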

Docker won't copy files from the container to the host's /tmp folder

I am trying to copy a file from a linux container to a linux host using docker cp. I want to copy this file to the /tmp folder on the host machine.
The problem is simple: I can copy to other places, such as my home folder. For example, this works:
docker cp my_container:/certificate.cer /home/adam/Documents/certificate.cer
But this does not work:
docker cp my_container:/certificate.cer /tmp/certificate.cer
However, the command completes with a zero exit code as if the operation was successful. I get no error feedback, but the file definitely isn't there.
Am I missing something, or is this a bug with the Docker CLI?
Edit: From further testing I have noticed that if I create a new directory in /tmp (i.e.
mkdir /tmp/test) and then try to copy the file into that subfolder, the copy fails with an error: stat /tmp/test/: not a directory.
This seems to indicate that docker is looking at a different folder. I am not sure where it could be looking, though.
Thanks
I believe I have found the answer to this:
Docker was installed as an Ubuntu Snap, which, as I understand it, is sandboxed. Running sudo ls /tmp/snap.docker/tmp showed me all the files I was missing.
So, it seems the snap version of docker works a little differently than expected. Uninstalling it and reinstalling from apt fixed the problem. :)
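If you hit the same thing, a quick way to check and fix it (a sketch, assuming an Ubuntu host with the snap-packaged docker):

snap list docker                 # confirms docker came from the snap store
sudo ls /tmp/snap.docker/tmp     # where the snap-confined docker actually writes "/tmp"
sudo snap remove docker          # then reinstall from apt, e.g.:
sudo apt install docker.io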

chroot process jail with arbitrary directory set as root on each run

I am trying to run a command that needs to be limited to one directory; it is executed from a shell function in a web application.
My goal is to run that program but limit it to one directory. This directory will change each time I want to run the program, and multiple instances need to be able to run on different directories at the same time.
I have looked at chroot, and it seems that a file system needs to be explicitly created each time. I am looking for a more lightweight, temporary solution that accepts the desired root directory and does not require me to copy files all over the place or do unusual mounting.
What you most likely want is containers.
A container takes milliseconds to start and creates what is essentially a complete chroot jail every time it runs. A command like docker run --rm --volume /var/chroot/jail/whatever:/workdir ubuntu stat /workdir will run stat on the jail directory, with the environment being the latest Ubuntu release. It will then discard the jail. Running it again will create a whole new jail.
You will need to build your web application as a docker image on top of whatever jail you need (Ubuntu, CentOS etc.), which means adjusting your build system to create such an image.
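A minimal sketch of wrapping this in a shell function so the jailed directory can change on every run (the ubuntu image and the example command are placeholders):

run_in_jail() {
    jail_dir="$1"; shift
    # Mount the per-run directory as /workdir and run the given command inside it
    docker run --rm --volume "$jail_dir":/workdir --workdir /workdir ubuntu "$@"
}

# e.g. run_in_jail /var/chroot/jail/whatever stat /workdir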

How to connect two containers when one of the containers is mounted to the filesystem?

Container 1: I have a docker container spawned from a Dockerfile image that I built to serve Python dependencies for a specific use case. My project directory on the host file system is mounted to the /source directory in this container.
Container 2: Another container contains Jupyter Notebook, and it is only configured to run Jupyter Notebook.
I don't use Jupyter Notebook all the time for Python development; I use my code editor instead. I just want an easier way to bring up Jupyter Notebook when I want to and edit my project files.
Now, can I mount Container 2 to Container 1 such that the contents of the /source directory in Container 1 are persisted to my project directory, while letting Container 1 and Container 2 share that space? In my case I want Container 1 to be mounted to my file system at /source, and I want the Jupyter Notebook running in Container 2 to be able to edit files inside the /source directory of Container 1.
If I'm understanding correctly, what you are trying to do is use the libs installed in one container from another container.
I think that approach is a bad idea, since there may be problems with missing OS-level dependencies. The first possible solution is installing the dependencies in both containers, or at least using the first docker image as the base of the second.
If you would still rather do it your way, you can try mounting a volume between the first container's virtual env / site-packages and the second container.
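For the file-sharing part of the question, a simpler route is to mount the same host project directory into both containers, so edits from either side land in the same files. A minimal sketch (the image names are placeholders, not from the original question):

docker run -it -v "$PWD":/source my-python-deps bash                  # Container 1: deps image, /source mounted from the host
docker run -d -p 8888:8888 -v "$PWD":/source jupyter/base-notebook    # Container 2: Jupyter sees the same /source files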

Docker - accessing files inside container from host

I am new to docker.
I ran a node:10 image, and inside the running container I cloned a repository and ran the app, which started a server with a file watcher. I need to access the codebase inside the container and open it up in an IDE running on the Windows host. Once that is done, I also want changes I make to the files in the IDE to trigger the file watcher in the container.
Any help is appreciated. Thanks.
The concept you are looking for is called volumes. You need to start a container and mount a host directory inside it. To the container it will look like a regular folder, and it will create files in it. To you it will also be a regular folder. Changes made by either side will be visible to the other.
docker run -v /a/local/dir:/a/dir/in/your/container <image>
Note though that you can run into permission issues that you will need to figure out separately.
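One common workaround for those permission issues on a Linux host (an assumption; Docker Desktop on Windows generally maps file ownership for you) is to run the container as your own UID/GID so files created in the mounted directory stay owned by you:

docker run -v /a/local/dir:/a/dir/in/your/container -u "$(id -u):$(id -g)" node:10 npm start
# npm start is a placeholder for whatever starts your app's file watcher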
It depends on what you want to do with the files.
There is the docker cp command that you can use to copy files to/from a container.
However, it sounds to me like you are using docker for development, so you should mount a volume instead, that is, you mount a directory on the host as a volume in docker, so anything written to that directory will show up in the container, and vice versa.
For instance, if you have the code base that you develop against in C:\src on your Windows machine, then you run docker with something like docker run -v c:\src:/app <image>, where /app is the location that node looks in. However, on Windows there are a few things to consider since Docker is not native on Windows, so have a look at the documentation first.
I think you should mount a volume for the source code and edit your code from your IDE normally:
docker run -it -v "$PWD":/app -w /app -u node node:10 yarn dev
Here docker will create a container from the node:10 image, set the working dir to /app, mount the current dir to /app, and run yarn dev at startup as the node user (a non-root user).
Hope this is helpful.
