Tracks stops working when volume is specified - linux

I am trying to get the Tracks Docker image to work.
When I run the given command docker run -d --name=tracks -p 80:80 staannoe/tracks, everything works fine. However, if I add a volume with docker run -d --volume /srv/tracks:/var/www --name=tracks -p 80:80 staannoe/tracks, then it suddenly breaks: after the docker run, when I point my browser at the Tracks URL, all I get is a 404 error.
I noticed that /srv/tracks is always empty as well, unlike /var/www in the volume-less case. Notably, docker logs reveals that when I specify the volume, I get:
AH00112: Warning: DocumentRoot [/var/www/tracks/public] does not exist
I also get this error even if I manually create /srv/tracks/public. What is the problem?
EDIT: I am no longer sure if permissions are the problem. I did sudo chmod 777 /srv/tracks and I still get the same error. I also tried to sudo chgrp 33 /srv/tracks (33 is apparently www-data; by default the directory is owned by root:root) and this still didn't solve it.

The folder /var/www seems to contain data that is needed by your app. When you mount /srv/tracks as a volume at /var/www, the image's contents at /var/www get hidden by the contents of /srv/tracks.
According to your warning message:
AH00112: Warning: DocumentRoot [/var/www/tracks/public] does not exist
You probably want to do something like this instead, placing the tracks folder as a subfolder below www rather than overwriting the whole www folder:
docker run ... -v /srv/tracks:/var/www/tracks ...
Otherwise: if you need to keep the /var/www files but replace www with another folder, you could place the tracks files into a different folder like /bootstrap during the Dockerfile build. Then, during container startup, you simply copy the /bootstrap files to /var/www by providing your own little startup script, as sketched below.
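A minimal sketch of that approach (the /bootstrap path, the start.sh name, and the apache2ctl command are assumptions for illustration, not taken from the actual image). Dockerfile fragment:

COPY tracks/ /bootstrap/
COPY start.sh /usr/local/bin/start.sh
ENTRYPOINT ["/usr/local/bin/start.sh"]

And the startup script:

#!/bin/sh
# start.sh: seed the mounted volume on first start, then launch the server.
if [ -z "$(ls -A /var/www/tracks 2>/dev/null)" ]; then
  mkdir -p /var/www/tracks
  cp -a /bootstrap/. /var/www/tracks/
fi
exec apache2ctl -D FOREGROUND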
EDIT: Make sure you do not mount an empty tracks folder into /var/www/tracks. Some files are expected in that folder; when they are not found you get an HTTP 404. These entries are required at the top level:
$ docker exec -it tracks ls /var/www/tracks
COPYING README.md bin db lib public vendor
Gemfile Rakefile config doc log test
Gemfile.lock app config.ru features mkdocs.yml tmp
Make sure that the custom tracks folder that you use as a volume is based on the original image's contents.
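One way to do that is to seed /srv/tracks from the image itself before mounting it (a sketch using docker create and docker cp; tracks-seed is just a scratch container name):

docker create --name tracks-seed staannoe/tracks
docker cp tracks-seed:/var/www/tracks/. /srv/tracks/
docker rm tracks-seed
docker run -d --volume /srv/tracks:/var/www/tracks --name=tracks -p 80:80 staannoe/tracks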

Related

Change default ignore rules when using az acr build

I am trying to access my git repo and perform some git commands inside the Docker image I build using az acr build. The image builds correctly, but I get these warnings:
WARNING: The login server endpoint suffix '.azurecr.io' is automatically omitted.
WARNING: Packing source code into tar to upload...
WARNING: Excluding '.git' based on default ignore rules
WARNING: Excluding '.gitignore' based on default ignore rules
And as you would expect, later when I try to copy files from my working directory into the image via the COPY instruction, the .git folder is not there. It seems like the default ignore settings for az acr build exclude the .git folder when it builds the image. I checked the documentation here: https://learn.microsoft.com/en-us/cli/azure/acr?view=azure-cli-latest#az-acr-build
It doesn't seem to say anything about changing the default ignore rules. Any ideas how I could change this?
For reference here is the az command I run:
az acr build \
-t $(docker_repository_name)/$(docker_image_name):$(Build.BuildNumber) \
--registry $(docker_registry_host) \
--file src/Dockerfile /home/vsts/work/1/s/ \
--platform $(docker_platform)
And here's the part of my dockerfile that tries to copy the .git folder:
FROM mcr.microsoft.com/dotnet/aspnet:5.0 AS base
WORKDIR /app
EXPOSE 80
EXPOSE 443
FROM mcr.microsoft.com/dotnet/sdk:5.0 AS build
COPY [".git", "test/.git/"]
RUN ls test -la
COPY ["/", "test/"]
RUN ls test -la
It's worth noting I am not ignoring it in .dockerignore.
Figured out how to do it! Your .dockerignore file needs to reference .git: you have to include !.git/** so that it knows .git is definitely not meant to be removed from the context.
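For example, a minimal .dockerignore that re-includes the folder (! is the standard exception syntax):

# .dockerignore
!.git/**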

Inside container file permission issue for non-root user

I am extending a docker image of a program from here and I want to change some configs and create my own docker image. I have written a Dockerfile as follows and replaced the server.xml file in this image:
FROM exoplatform/exo-community
COPY server.xml /opt/exo/conf
RUN chmod 777 /opt/exo/conf/server.xml
When I create the Docker image and run an instance from it, the running program in the container cannot access server.xml, because its owner is root and I see a permission denied error. I tried to change the permissions in the Dockerfile with the chmod command, but I see an Operation not permitted error. The user of the running container is not root, so it cannot access the server.xml file that is owned by root. How can I resolve this issue?
If this is actually just a config file, I wouldn't build a custom image around it. Instead, use the docker run -v option to inject it at runtime:
docker run \
-v $PWD/server.xml:/opt/exo/conf/server.xml \
... \
exoplatform/exo-community
(You might still hit the same permission issues.)
In your Dockerfile approach, the base image runs as an alternate USER but a COPY instruction by default makes files owned by root. As of relatively recent Docker (18.03; if you're using Docker 1.13 on CentOS/RHEL 7 this won't work) you should be able to
COPY --chown=exo server.xml /opt/exo/conf
Or if that won't work, you can explicitly switch to the root user and back
COPY server.xml /opt/exo/conf
USER root
RUN chown exo /opt/exo/conf/server.xml
USER exo

How to change user of docker service?

I'm having a problem because I've installed & started Docker as "bad_user". The problem is that this container generates static files (it's the jekyll/jekyll image), and those files are owned by "bad_user", so I cannot edit them (I know I could add myself to the bad_user group or chown -R the directory, but it would be painful to do every time, and it just bugs me :).
I have tried to reinstall Docker and remove the /etc/docker directory, without any effect. Every time I reinstall it, the Docker service runs as "bad_user" and overwrites the directory owner.
My question is:
Would it be possible to make Docker run under a "docker" user? I have already created that user with that group (yes, I have reinstalled docker-ce under that user already).
I'm working on a Debian-based distro.
I guess in my case it's a Docker daemon issue; somehow when it synchronizes shared volume files it gives ownership to bad_user instead of the user who is running the container.
PS. This is the command I run, if that matters:
docker run --rm -p 8000:8000 \
--volume="/home/docker/blog:/srv/jekyll" \
-it tocttou/jekyll:3.5 \
jekyll serve --watch --port 8000
Okay, I figured it out. It turns out that when you run a Linux container that creates some files on the shared volume (the -v argument makes the shared volume), those files will be owned by the user with user id = 1000 and group id = 1000. In my case the user with id = 1000 was "bad_user". If you want to work around that, you can use --user and specify the user id you want to run under.
The key is to remember that Linux permissions are just numbers: to the host filesystem, number 1000 is (in my case) "bad_user" and 1001 is "docker_user". If you check permissions from inside the container you might see that user id = 1000 means a very different user than on your host system.
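Applied to the command above, running the container with your own UID/GID would look something like this (a sketch; id -u and id -g resolve your host user at run time):

docker run --rm -p 8000:8000 \
--user "$(id -u):$(id -g)" \
--volume="/home/docker/blog:/srv/jekyll" \
-it tocttou/jekyll:3.5 \
jekyll serve --watch --port 8000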
I hope the next people who encounter this issue will find this useful.
You can find more information here: https://dille.name/blog/2018/07/16/handling-file-permissions-when-writing-to-volumes-from-docker-containers/

Sharing a single file from host machine with Docker Container and having the Container r+w to same file

I've got a .json file that I want to persist between runs of a given container. In addition, this file needs to be appended to by the container as part of it running.
The syntax which I've been using so far is as follows:
docker run -d -p 3001:3001 -v /usr/bob/database.json:/app/data/database.json:rw --name=myapp appImage
Nothing gets inserted into the file (though I can cat the contents inside and outside the container to confirm they're the same). I have ensured that the root user (yes, not best practice) who is running Docker owns all of the files in that folder and has full rwx.
What DOES work is if I bind at the folder level eg:
docker run -d -p 3001:3001 -v /usr/bob:/app/data --name=myapp appImage
Can anyone explain the difference?
I feel that sharing access to a folder instead of a single file is a lot less precise and also causes structural changes in the app (eg. source control with multiple files (plus the .json file mentioned) in the same folder).
Thanks in advance for any pointers.
Thanks,
Andrew
Mounting a file as a volume mounts a specific inode inside the container. Many tools that modify a file will change the inode when writing a new copy of the file, and this new inode is stored in the directory as the new pointer to that filename. When the directory is mounted you see the change on your host, but otherwise you only see it inside the container, since the inode on the host and the pointer to it in the host directory are unchanged.
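You can watch this happen on the host (illustrative; ls -i prints a file's inode number, and GNU sed -i always writes a new file, even when nothing matches):

$ ls -i database.json    # note the inode number
$ sed -i 's/old/new/' database.json
$ ls -i database.json    # a different inode; a container that bind-mounted the file still sees the old one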
There are more details on this behavior in Docker's tutorial on volumes: https://docs.docker.com/engine/tutorials/dockervolumes

Shared volume/file permissions/ownership (Docker)

I'm having a slightly annoying issue while using a Docker container (I'm on Ubuntu, so no virtualization like VMWare or b2d). I've built my image, and have a running container that has one shared (mounted) directory from my host, and one shared (mounted) file from my host. Here's the docker run command in full:
docker run -dit \
-p 80:80 \
--name my-container \
-v $(pwd)/components:/var/www/components \
-v $(pwd)/index.php:/var/www/index.php \
my-image
This works great, and both /components (and its contents) and the file are shared appropriately. However, when I want to make changes to either the directory (e.g. adding a new file or folder) or edit the mounted file (or any file in the directory), I'm unable to do so due to incorrect permissions. Running ls -lFh shows that the owner and group for the mounted items have been changed to libuuid:libuuid. Modifying either the file or the parent directory requires root permissions, which impedes my workflow (as I'm working from Sublime Text, not Terminal, I'm presented with a popup for admin privs).
Why does this occur? How can I work around this / handle this properly? From Managing Data Volumes: Mount a Host File as a Data Volume:
Note: Many tools used to edit files including vi and sed --in-place may result in an inode change. Since Docker v1.1.0, this will produce an error such as “sed: cannot rename ./sedKdJ9Dy: Device or resource busy”. In the case where you want to edit the mounted file, it is often easiest to instead mount the parent directory.
This would seem to suggest that instead of mounting /components and /index.php, I should instead mount their parent directory. Sounds great in theory, but based on the behavior of the -v option and how it interacts with a directory, it would seem that every file in my parent directory would be altered to be owned by libuuid:libuuid. Additionally, I have lots of things inside the parent directory that are not needed in the container - things like build tools, various files, some compressed folders, etc. Mounting the whole parent directory would seem to be wasteful.
Running chown user:group on /components and /index.php on my host machine lets me work around this, and they seem to continue to sync with the container. Is this something I'll need to do every time I run a container with mounted host volumes? I'm guessing that there is a more efficient way to do this, and I'm just not finding an explanation for my particular use-case anywhere.
I am using this container for development of a module for another program, and have no desire to manage a data-only container - the only files that matter are from my host; persistence isn't needed elsewhere (like a database, etc).
Dockerfile and /setup: created on pastebin to avoid an even longer post. Never expires.
After creating the image, this is the run command I'm using:
docker run -dit \
-p 80:80 \
--name my-container \
-v $(pwd)/components:/var/www/wp-content/plugins/my-plugin-directory/components \
-v $(pwd)/index.php:/var/www/wp-content/plugins/my-plugin-directory/index.php \
my-image
It looks like your chown -R nginx:nginx ... commands inside your container are changing the ownership bits on your files to be owned by libuuid on your host machine.
See Understanding user file ownership in docker: how to avoid changing permissions of linked volumes for a basic explanation of how file ownership bits work between your host and your docker containers.
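One common workaround (a sketch, not from the linked answer; the HOST_UID build argument is illustrative, and the nginx user is assumed from your setup script): rebuild the image so the container user's UID matches your host user's, so files written through the bind mount stay owned by you on the host. Dockerfile fragment:

# Align the container user's UID with the host user's UID (default 1000)
ARG HOST_UID=1000
RUN usermod -u ${HOST_UID} nginx \
&& chown -R nginx:nginx /var/www
USER nginx

Build it with docker build --build-arg HOST_UID=$(id -u) -t my-image . so the UID is taken from your current host user.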
