I'm using docker-compose to start containers. I have an image that ships with a default config inside, but I want to provide my own config, based on the default, via volumes. So I need to obtain the default config and be able to change it. When I write the volume like this: volumes: - config.json:/etc/config.json, it creates a folder named config.json on the host. How should I handle this config?
On your first run, you can get the default config from the etc folder by mounting the entire etc folder, so that all the files in the container's etc folder become available on your local machine:
volumes:
- ./localetc:/etc
A new folder called localetc will be created in your current directory.
Then cd localetc && ls -la, and you can see your config.json file.
You can edit config.json with your custom configuration and restart the containers by issuing docker-compose restart.
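Putting the two steps together, the compose file might look like this (a sketch; the service and image names are placeholders):

```
services:
  myservice:
    image: myimage
    volumes:
      - ./localetc:/etc        # first run: get the defaults onto the host
```

Once the default config.json is on your host, you can narrow the mount down to just that one file:

```
    volumes:
      - ./localetc/config.json:/etc/config.json
```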
I do it like this when my config.json file is in the same folder as my docker-compose file.
volumes:
- ${PWD}/config.json:/etc/config.json
I noticed that if I use ./config.json it will create a folder named config.json, so I used ${PWD} instead.
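The folder appears because Docker creates a missing host path for a bind mount as a directory. Another workaround is to make sure the file exists on the host before the first docker-compose up (a sketch):

```shell
# Docker creates a missing bind-mount host path as a directory;
# creating the file first makes the mount treat it as a file.
touch config.json
ls -ld config.json
```

After that, ./config.json mounts as a file as expected.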
I've created a simple example where I'm trying to achieve a simple task: I need to mount a specific folder inside my container to a specific folder on the host machine.
The Dockerfile that creates the image I use in my docker-compose is the following:
FROM ubuntu:bionic
COPY ./file.txt /vol/file.txt
In this folder, on the container side, I copy a file called file.txt, and I need this behaviour on the host machine:
If the host machine folder is empty or doesn't exist, or the file is missing, the file.txt file should be copied into it.
If the host machine folder already has a file with that name, the file is preserved and not overwritten.
This is exactly what happens when a named volume is used inside docker-compose. The problem is that you can't choose the folder on the host machine: you can only assign a name to the volume, and Docker will create it inside one of its own folders.
Using a bind mount instead lets you choose the folder on the host machine, but the behaviour is different: even if the folder on the host machine is empty or doesn't exist, the file file.txt is hidden inside the container.
A practical example of this is a common application distributed with Docker. I start from a dotnet image and copy my application into it; the application starts on container load. Inside this application I have a configuration file that I need to preserve across image updates, so I need to copy it to the host machine. This file has to be editable by the user of the host machine, so it can't live in a subfolder of the Docker installation with a random name.
Is there a way to achieve this?
I'm using Docker for Windows with an owncloud container, and I shared volumes, set up in docker-compose.yml, in order to persist the data.
Extract from my docker-compose.yml
owncloud:
  image: jobel/owncloud:1.1
  links:
    - db:db
  ports:
    - 8002:80
  volumes:
    - f:/Program Files/Docker/Docker VM/owncloudconfig:/var/www/html
    - f:/Program Files/Docker/Docker VM/owncloudconfig/apps:/var/www/html/apps
    - f:/Program Files/Docker/Docker VM/owncloudconfig/config:/var/www/html/config
    - f:/Program Files/Docker/Docker VM/owncloudconfig/data:/var/www/html/data
The data is present, but when I try to connect in my web browser, it says: "Your data folder can be read by other users. Please change the directory permissions to 0770 mode so that the content is not listed by other users." (translated fr->eng)
But how can I change the Windows folder permissions to 0770? ;(
Or what are the others solutions?
Thank you in advance for your answer.
In the Windows client, you can configure Docker to allow read/write access on specific drives; in your case, it is the F: drive. This should fix your permissions problem.
The web server running in Docker is checking the folder permissions, which has nothing to do with Windows; you need to verify the folder permissions inside the container itself.
Below is an example you can play with; you just need to let Docker execute the Linux commands in the container. Hope this helps. E.g.:
$ docker container exec -t local-container ls -ld /var/www/html/config
$ docker container exec -t local-container chmod 0770 /var/www/html/config
I've got a situation where I've got a .json file that I want to persist between runs of a given container. In addition this file needs to be appended to by the container as part of it running.
The syntax which I've been using so far is as follows:
docker run -d -p 3001:3001 -v /usr/bob/database.json:/app/data/database.json:rw --name=myapp appImage
Nothing gets inserted into the file (though I can cat the contents inside and outside the container to confirm they are the same). I have ensured that the root user (yes, not best practice) who is running Docker owns all of the files in that folder and has full rwx permissions.
What DOES work is binding at the folder level, e.g.:
docker run -d -p 3001:3001 -v /usr/bob:/app/data --name=myapp appImage
Can anyone explain the difference?
I feel that sharing access to a folder instead of a single file is a lot less precise, and it also forces structural changes in the app (e.g. source control with multiple files, plus the .json file mentioned, in the same folder).
Thanks in advance for any pointers.
Thanks,
Andrew
Mounting a file as a volume mounts a specific inode inside the container. Many tools that modify a file change the inode when writing a new copy of the file; the new inode is then stored in the directory as the new pointer to that filename. When the directory is mounted, you see the change on your host; otherwise you only see it inside the container, since the inode on the host and the pointer to it in the host directory are unchanged.
There are more details on this behavior in Docker's tutorial on volumes: https://docs.docker.com/engine/tutorials/dockervolumes
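You can observe this inode replacement without Docker at all: many editors and tools "save" by writing a temp file and renaming it over the original, which produces a new inode. A minimal sketch using /tmp:

```shell
f=/tmp/inode-demo.txt
echo old > "$f"
before=$(ls -i "$f" | awk '{print $1}')

# Simulate an editor's "safe save": write a new file, then rename it over the old one.
echo new > "$f.tmp"
mv "$f.tmp" "$f"

after=$(ls -i "$f" | awk '{print $1}')
if [ "$before" != "$after" ]; then
  echo "inode changed: $before -> $after"
fi
```

A bind-mounted single file keeps pointing at the old inode, which is why the container never sees the new contents.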
I have trouble setting permissions for a web folder on Elastic Beanstalk. I run multiple containers using custom Docker images in one instance: apache-php, mysql, memcached, etc. For the "apache-php" container I map a folder with my yii2 application to /var/www/html/.
When I manually make a bundle and do an upload/deploy via the Elastic Beanstalk console, I do have the right permissions for the folder and everything works fine.
Now, when I deploy the app using "eb deploy", it drops all the permissions and I get a server error and "The directory is not writable by the Web process: /var/www/html/backend/web/assets" in the logs.
I can connect via ssh and set the necessary permissions manually, but this is obviously not convenient, since it needs to be done every time I re-deploy the app.
So, my question is: what is the best way to automatically set permissions for a particular folder in a particular container on Elastic Beanstalk?
Perhaps I can use .ebextensions, but I didn't find out how to run "container_commands" for a particular container.
AWS EB Deployment starts your app in /var/app/ondeck
When deploying to Elastic Beanstalk, your app is first unzipped into /var/app/ondeck/.
Most likely, your local folder being deployed does not have the permissions you want on them.
If you need to make adjustments to your app, or the shell, during deployment, .ebextensions/*.config is the right place to do it.
Container commands should be run against that path.
But keep in mind that these commands will run EVERY time you deploy, whether needed or not, unless you use some method to test for prior configuration.
container_commands:
  08user_config:
    test: test ! -f /opt/elasticbeanstalk/.preconfig-complete
    command: |
      echo "jail-me" > /home/ec2-user/.userfile
  09writable_dirs:
    command: |
      chmod -R 770 /var/app/ondeck/backend/web/assets
      chmod -R 770 /var/app/ondeck/[path]
  99complete:
    command: |
      touch /opt/elasticbeanstalk/.preconfig-complete
files:
  "/etc/profile.d/myalias.sh":
    mode: "000644"
    owner: root
    group: root
    content: |
      alias webroot='cd /var/www/html/backend/web; ls -al --color;'
      echo " ========== "
      echo " The whole point of Elastic Beanstalk is that you shouldn't need to SSH into the server. "
      echo " ========== "
Yes you should use ebextensions.
Create a folder in your app source root called .ebextensions. Create a file with a .config extension, say 01-folder-permissions.config. Files are processed in the lexicographical order of their names.
Contents of the file can be:
container_commands:
  change_permissions:
    command: chmod 777 /var/www/some-folder
Replace with appropriate folder and permissions. Read about container commands here.
I am trying to get the Tracks Docker image to work.
When I run the given command docker run -d --name=tracks -p 80:80 staannoe/tracks, everything goes fine. However, if I add a volume with docker run -d --volume /srv/tracks:/var/www --name=tracks -p 80:80 staannoe/tracks then suddenly it breaks: After doing the docker run, when I point my browser to the Tracks URL, all I get is a 404 error.
I noticed that /srv/tracks is always empty as well, unlike /var/www in the volume-less case. Notably, docker logs reveals that when I specify the volume, I get:
AH00112: Warning: DocumentRoot [/var/www/tracks/public] does not exist
I also get this error even if I manually create /srv/tracks/public. What is the problem?
EDIT: I am no longer sure if permissions are the problem. I did sudo chmod 777 /srv/tracks and I still get the same error. I also tried to sudo chgrp 33 /srv/tracks (33 is apparently www-data; by default the directory is owned by root:root) and this still didn't solve it.
The folder /var/www seems to contain data that is needed by your app. When you mount /srv/tracks as a volume onto /var/www, the container's contents get hidden by the contents of /srv/tracks.
According to your warning message:
AH00112: Warning: DocumentRoot [/var/www/tracks/public] does not exist
Probably you want to do something like this instead, and place the tracks folder as a subfolder below www instead of overwriting the whole www folder:
docker run ... -v /srv/tracks:/var/www/tracks ...
Otherwise: if you need to keep the /var/www files but use another folder to replace www, you could place the tracks files into a different folder like /bootstrap during the Dockerfile build. During container startup you then simply copy the /bootstrap files to /var/www with your own little startup script.
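A minimal sketch of that bootstrap approach (the /bootstrap path, entrypoint.sh, and the final exec'd command are placeholders; the base image's own start command would need to be checked):

```
# Dockerfile: stash the app files where a volume mount cannot hide them
FROM staannoe/tracks
RUN cp -a /var/www/tracks /bootstrap
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
```

```
#!/bin/sh
# entrypoint.sh: seed the (possibly empty) volume, then start the app.
# cp -n keeps any files that already exist in the volume.
cp -an /bootstrap/. /var/www/tracks/
exec "$@"   # falls through to the image's original CMD, if it defines one
```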
EDIT: Make sure you do not mount an empty tracks folder into /var/www/tracks. Some files are expected in that folder; when they are not found, you get an HTTP 404. These folders are required at the top level:
$ docker exec -it tracks ls /var/www/tracks
COPYING README.md bin db lib public vendor
Gemfile Rakefile config doc log test
Gemfile.lock app config.ru features mkdocs.yml tmp
Make sure that the custom tracks folder that you use as a volume is based on the original image's contents.