Where to place app config/logs in container - linux

I've got a python package running in a container.
Is it best practice to install it in /opt/myapp within the container?
Should the logs go in /var/opt/myapp?
Should the config files go in /etc/opt/myapp?
Is anyone recommending writing logs and config files to /opt/myapp/var/log and /opt/myapp/config?
I notice Google Chrome was installed in /opt/google/chrome on my (host) system, but it didn't place any configs in /etc/opt/...

Is it best practice to install it in /opt/myapp within the container?
I place my apps inside my container images in /app. So near the beginning of the Dockerfile I have
WORKDIR /app
Should the logs go in /var/opt/myapp?
In the container world, the best practice is for your application logs to go to stdout and stderr, not to files inside the container. Containers are ephemeral by design and should be treated that way: when a container is stopped and deleted, all of the data on its filesystem is gone.
In a local Docker development environment you can see the logs with docker logs. For example, you can:
start a container named gettingstarted from the image docker/getting-started:
docker run --name gettingstarted -d -p 80:80 docker/getting-started
redirect the docker logs output to a local file on the Docker client (the machine from which you run the docker commands):
docker logs -f gettingstarted &> gettingstarted.log &
open http://localhost to generate some logs
follow the log file in real time with tail or open it with any text viewer:
tail -f gettingstarted.log
Should the config files go in /etc/opt/myapp?
Again, you can put the config files anywhere you want. I like to keep them together with my app, so in the /app directory, but you should not modify the config files once the container is running. Instead, pass the configuration values to the container as environment variables at startup with the -e flag. For example, to create a MYVAR variable with the value MYVALUE inside the container, start it this way:
docker run --name gettingstarted -d -p 80:80 -e MYVAR='MYVALUE' docker/getting-started
exec into the container to see the variable:
docker exec -it gettingstarted sh
/ # echo $MYVAR
MYVALUE
From here it is the responsibility of your containerized app to understand these variables and translate them into actual application configuration. Most programming languages can read env vars from code at runtime, but if that is not an option you can write an entrypoint.sh script that updates the config files with the values supplied through the env vars. A good example of this is the postgresql entrypoint: https://github.com/docker-library/postgres/blob/master/docker-entrypoint.sh
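As a minimal sketch of that entrypoint approach (the file names and the __DB_PASSWORD__ placeholder are made up for illustration), the script renders the config from env vars and then hands control to the app:
#!/bin/sh
# entrypoint.sh (hypothetical sketch): render config from env vars, then start the app
set -e
# replace a placeholder in a config template with the value supplied at docker run time
sed "s/__DB_PASSWORD__/${DATABASE_PASSWORD}/" /app/config.template > /app/config.ini
# hand over PID 1 to the real application
exec python /app/main.py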
Is anyone recommending writing logs and config files to
/opt/myapp/var/log and /opt/myapp/config?
As you can see, it is not recommended to write logs into the filesystem of the container; rather, you should have a solution that saves them outside of the container if you need them persisted.
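For local development, one option (just a sketch, using the json-file logging driver that Docker ships with) is to let the Docker host keep a bounded amount of log data and read it with docker logs, for example:
# keep at most 3 rotated files of 10 MB each on the Docker host (values are examples)
docker run --name gettingstarted -d -p 80:80 --log-driver json-file --log-opt max-size=10m --log-opt max-file=3 docker/getting-started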
If you understand and follow this mindset, especially that containers are ephemeral, it will be much easier for you to transition from local Docker development to production-ready Kubernetes infrastructure.

Docker is Linux, so almost all of your concerns are related to the best operating system in the world: Linux.
Installation folder
This will help you:
Where to install programs on Linux?
Where should I put software I compile myself?
and this: Linux File Hierarchy Structure
As a summary, in Linux you could use any folder for your apps, bearing in mind:
Don't use system folders: /bin /usr/bin /boot /proc /lib
Don't use filesystem mount folders: /media /mnt
Don't use the /tmp folder, because its content is deleted on each restart
As you researched, you could imitate Chrome and use /opt
You could create your own folder, like /acme, if several developers work on the machine, so you can tell them: "No matter the machine or the application, all the custom content of our company will be in /acme". This also helps if you are security-paranoid, because nobody can easily guess where your application is. Anyway, if an attacker has access to your machine, it is just a matter of time before they find everything.
You could use fine-grained permissions to keep the chosen folder safe
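A minimal sketch of that layout with restrictive permissions (the user, group and paths are only examples):
# create app, config and state directories following the /opt convention
sudo mkdir -p /opt/myapp /etc/opt/myapp /var/opt/myapp
# give ownership to a dedicated service account (hypothetical user and group)
sudo chown -R myapp:myapp /opt/myapp /etc/opt/myapp /var/opt/myapp
# owner gets full access, group read/execute, others nothing
sudo chmod -R 750 /opt/myapp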
Log Folder
Similar to the previous paragraph:
You could store your logs in the standard location, /var/log/acme.log
Or create your own company standard
/acme/log/api.log
/acme/webs/web1/app.log
Config Folder
This is the key for devops.
In traditional, manual deployments of the past, some folders were used to store the app configurations, like:
/etc
$HOME/.acme/settings.json
But in the modern era, and if you are using Docker, you should not manually store your settings inside the container or on the host. The best way to have just one build and deploy it n times (dev, test, staging, uat, prod, etc.) is to use environment variables.
One build, n deploys, and the use of env variables are fundamental for devops and cloud applications. Check the famous https://12factor.net/
III. Config: Store config in the environment
V. Build, release, run: Strictly separate build and run stages
This is also a good practice in any language. Check this from Heroku: Configuration and Config Vars
So your Python app should not read or expect a file in the filesystem to load its configuration. Maybe for dev, but not for test and prod.
Your Python app should read its configuration from env variables:
import os
print(os.environ['DATABASE_PASSWORD'])
And then inject these values at runtime:
docker run -it -p 8080:80 -e DATABASE_PASSWORD=changeme my_python_app
And on your developer localhost, export the variable in the same shell before running your application:
export DATABASE_PASSWORD=changeme
python myapp.py
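When the number of variables grows, one option (a sketch; the file name is just an example) is to keep them in an env file and pass the whole file to the container at startup:
# contents of a hypothetical .env.dev file, one KEY=value per line:
#   DATABASE_PASSWORD=changeme
#   DATABASE_HOST=db.internal
docker run -it -p 8080:80 --env-file .env.dev my_python_app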
Config of a lot of apps
The previous approach is fine for a couple of apps. But if you move towards microservices and microfrontends, you will have dozens of apps in several languages. In that case, to centralize the configuration you could use:
spring cloud
zookeeper
https://www.vaultproject.io/
https://www.doppler.com/
Or the Configurator (I'm the author)

Related

The usage of Docker base images of Azure Functions

I'm new to both Docker and Azure Functions so it must be a silly question...
You can pull the images of Azure Functions from Docker Hub, like:
docker pull mcr.microsoft.com/azure-functions/node:3.0-node12
Now I pulled the image of a specific runtime of Azure Functions, but what can I do with this exactly?
First I thought I could find Azure Functions Core Tools inside the container, then I found the azure-functions-host directory with a bunch of files, but I'm not sure what it is.
docker exec -it "TheContainerMadeOfAzureFunctionsImage" bash
-> FuncExtensionBundles azure-functions-host bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var
Thank you in advance.
You can install the remote development extension tools for VSCode and the Azure Functions extension.
Create your local folder, then, using the remote development tools, open that folder inside a container from the command palette by selecting 'Reopen in Container'.
Then select your definition.
This actually uses those base images you mentioned.
It will create a .devcontainer hidden directory in your repo where it stores the container information and saves you having to install the Function Core tools/NPM or anything else on your local machine.
It automatically forwards the required ports for local debugging and you can push the devcontainer definitions to source control so that others can use your definition with the project.
Last week I solved it myself. I found the exact image in Docker Hub, then docker pull mcr.microsoft.com/azure-functions/node:3.0-node12-core-tools and that's it.
You can find a full list of available tags for each runtime.
Inside the container you can run both the Azure Functions Core Tools and a language runtime (like Node.js or Python), and of course you can create function apps.
With port forwarding, like docker run -it -p 8080:7071 --name container1 mcr.microsoft.com/azure-functions/node:3.0-node12-core-tools bash, you can debug your functions running inside a container (which uses port 7071) from your local machine by sending HTTP requests to localhost:8080. This is somewhat brute force, but I'm happy.
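As a sketch of what that looks like in practice (the function name HttpExample is hypothetical), once an HTTP-triggered function app is running inside the container on port 7071 you can call it through the forwarded port:
# call an HTTP-triggered function from the host through the forwarded port
curl "http://localhost:8080/api/HttpExample?name=docker"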

how to access the file system of my local PC from within the docker container

I want to start the following docker container and have terminal access to it:
docker run -it docker:5000/builds/build-lnx64-centos7:latest /bin/bash
The problem is that inside the terminal I cannot find any of the files from my file system, such as ~/Desktop and similar directories.
Question: how to access the file system of my local PC from within the docker container?
By default, containers cannot see the file system of their host.
If you want to achieve this, you will have to explicitly "mount" whatever directories you want to see using the -v flag, like this:
docker run -v ~/Desktop:/host-desktop -it docker:5000/builds/build-lnx64-centos7:latest /bin/bash
If you run that command, you will see the contents of your desktop in the container's file system, at /host-desktop.
You really would not want your containers to be able to see the entire host file system. That would be dangerous, especially if the container has write permission. You should always mount only the exact files/directories you want the container to access.
For the most part, any project I have worked on that uses Docker does volume mounting either so that the container can write files and the developer can easily access them on the host (e.g. Selenium tests taking screenshots), or so that the developer can edit source code and the container will see the update and hot-reload (e.g. Node.js development). When doing the latter (the hot-reload example), it is usually wise to mount in read-only mode.
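For example (a sketch with made-up paths and image name), a source directory mounted read-only for hot-reload alongside a writable directory for test output:
# source is mounted read-only (:ro); the screenshots directory stays writable
docker run -v "$(pwd)/src":/app/src:ro -v "$(pwd)/screenshots":/app/screenshots my-node-app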
See the docs for more details: https://docs.docker.com/engine/reference/commandline/run/#mount-volume--v---read-only

Execute script located inside linked volume on host environment [duplicate]

Can I run a docker command on the host? I installed aws inside my Docker container; can I somehow use the aws command on the host (so that under the hood it uses the container's aws)?
My situation is this: I have database backups on the production host. I have a Jenkins cron job that takes the SQL file from the db container and puts it into a server folder. Now I also want Jenkins to upload this backup file to AWS storage, but the host has no aws installed, and I don't want to install anything except Docker on my host, so I think aws should be installed inside a container.
You can't directly do this. Docker containers and images have isolated filesystems, and the host and containers can't directly access each others' filesystems and binaries.
In theory you could write a shell script that wraps docker run, name it aws, and put it in your $PATH:
#!/bin/sh
exec docker run --rm -it awscli aws "$@"
but this doesn't scale well, requires you to have root-level permissions on the host, and you won't be able to access files on the host (like ~/.aws/config) or environment variables (like $AWS_ACCESS_KEY_ID) without additional setup.
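A sketch of that additional setup (the awscli image name is taken from the script above; paths assume the container runs as root) would be to mount your AWS config read-only and forward the credential variables into the wrapper:
#!/bin/sh
# hypothetical aws wrapper that forwards host credentials and the working directory into the container
exec docker run --rm -it \
  -v "$HOME/.aws":/root/.aws:ro \
  -e AWS_ACCESS_KEY_ID -e AWS_SECRET_ACCESS_KEY \
  -v "$PWD":/work -w /work \
  awscli aws "$@"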
You can just install software on your host instead, and it will work normally. There's no requirement to use Docker for absolutely everything.

Live reload Node.js dev environment with Docker

I'm trying to work on a dev environment with Node.js and Docker.
I want to be able to:
run my docker container when I boot my computer once and for all;
make changes in my local source code and see the changes without interacting with the docker container (with a mount).
I've tried the Node image and, if I understand correctly, it is not what I'm looking for.
I know how to make the mount point, but I'm missing how the server is supposed to detect the changes and "relaunch" itself.
I'm new to Node.js so if there is a better way to do things, feel free to share.
run my docker container when I boot my computer once and for all;
start containers automatically with the docker daemon or with your process manager
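For example (a sketch; the container and image names reuse the ones from the command below), a restart policy lets the Docker daemon bring the container back up after a reboot, as long as the daemon itself starts on boot:
# restart automatically on boot or crash unless the container was explicitly stopped
docker run -d --name myapp --restart unless-stopped image/app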
make changes in my local source code and see the changes without
interacting with the docker container (with a mount).
You need to mount your dev app folder as a volume
$ docker run --name myapp -v /app/src:/app image/app
and, in your Node.js Dockerfile, set nodemon as the command:
CMD ["nodemon", "-L", "/app"]

Managing directory permissions across the host and Docker container environments

I'm trying to use a stack built with Docker containers to run a Symfony2 application (SfDocker). The stack consists of interlinked containers where ubuntu:14.04 is the base:
mysql db
nginx
php-fpm
The recurring problem that I'm facing is managing directory permissions inside the container. When I mount a volume from the host, e.g.
volumes:
- symfony-code:/var/www/app
The mounted directories will always be owned by root or an unidentified user (only a user ID is visible when running ls -al) inside the container.
This, essentially, makes it impossible to access the application through the browser. Of course, running chown -R root:www-data on public directories solves the problem, but as soon as I want to write to e.g. the 'cache' directory from the host (where the user is ltarasiewicz), I get a permission denied error. On top of that, whenever an application running inside a container creates new directories (e.g. 'logs'), they are again owned by root and later inaccessible to the browser or my desktop user.
So my questions are:
How should I manage permissions across the host and container environments (when I want to run commands on the container from both environments)?
Is it possible to configure Docker so that directories mounted as volumes receive specific ownership/permissions (e.g. 'root:www-data') automatically?
Am I free to create new users and user groups inside my 'nginx' container built from the ubuntu:14.04 image?
A few general points, apologies if I don't answer your questions directly.
Don't run as root in the container. Create a user in the Dockerfile and switch to it, either with the USER statement or in an entrypoint or command script. See the Redis official image for a good example of this. (So the answer to Q3 is yes, you can and should, but do it via the Dockerfile; don't make changes to containers by hand.)
Note that the official images often do a chown on volumes in the entrypoint script to avoid the issue you describe in Q2.
Consider using a data container rather than linking directly to host directories. See the official docs for more information.
Don't run commands from the host on the volumes. Just create a temporary container to do it or use docker exec (e.g. docker run -v /myvol:/myvol myimage touch /myvol/x).
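Two sketches of those last points (the volume name symfony-code and the www-data user follow the question; adjust them to your setup): a one-off container that gives the web server ownership of the mounted code, and an ad-hoc command run as your host UID so the created files are owned by your user rather than root:
# one-off container: give the web server's user ownership of the mounted code
docker run --rm -v symfony-code:/var/www/app ubuntu:14.04 chown -R www-data:www-data /var/www/app
# run a command on the volume as your host user so the new file belongs to your UID
docker run --rm -u "$(id -u):$(id -g)" -v symfony-code:/var/www/app ubuntu:14.04 touch /var/www/app/testfile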
