I am trying to setup Frontail to access the logs for a Django application deployed using Docker locally. Has anyone done this before? There is very little documentation or other information online.
I found a solution: add Frontail as another service in the docker-compose.yml file. After pulling the Frontail image from Docker Hub, you can add the following under your other services:
services:
  ...
  logs:
    image: mthenw/frontail
    ports:
      - "9001:9001"
    command: /var/log/log_filename.log
    volumes:
      - type: bind
        source: path_to_logs_directory
        target: /var/log
Where log_filename.log is the filename your Django logger is using and path_to_logs_directory is the relative directory containing this file. The command argument is what is passed to Frontail (i.e. the log file to monitor). Note that there is no real reason I used the /var/log directory in the Frontail image - it just seemed to make sense after looking around the file structure.
The only way I could get this to work was with a bind mount (thanks to nonNumericalFloat), but I'd be interested to know if there is a better way to do this.
I have a spring-boot application running inside a docker container, and it's working fine. But the application log file is empty inside the docker container.
In logback-spring.xml, the log path is configured to /var/log.
When I go to the /var/log directory inside the docker container, I can see the log file has been created, e.g. "myservice.log",
but when I "cat" the file to see the content, it is completely empty.
Also when I execute
docker logs <container-id>
it returns nothing.
I also checked the docker root directory on the server:
/apps/docker/containers/<container-id>/<container-id>-json.log
That is also empty.
My Dockerfile has the following structure:
FROM private-docker-repo/openjdk:11-jre-slim
WORKDIR /opt/services
COPY target/my-service-0.0.1-SNAPSHOT.jar /opt/services/my-service.jar
CMD java -Dspring.profiles.active=dev -Dserver.port=61016 -jar my-service.jar
EXPOSE 61016
What can be the reason for the log file being empty here? I'd highly appreciate it if anyone can point me in the right direction.
Edit - when I deploy the same jar using a Linux systemd service, the logs are written just fine. I want to know why the same jar does not print any logs inside the docker container.
Thanks in advance..!
Are you sure that your application is running? Get into the docker container and check whether it's running; it seems to me it's not started.
I solved this just by replacing the CMD instruction with ENTRYPOINT. Now the logs print just fine. I did some searching on the difference between CMD and ENTRYPOINT, but I still can't understand how that affects the logging of a container. So if anyone can add a comment on what could have happened, that would be great - not only for me but for others who find this question in the future.
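For what it's worth, the difference most often pointed to in cases like this is shell form versus exec form, rather than CMD versus ENTRYPOINT as such - a sketch of the two forms, reusing the java command from the question:

```
# Shell form: the command is run as a child of /bin/sh -c, so the shell
# (not the JVM) is PID 1 inside the container.
CMD java -Dspring.profiles.active=dev -Dserver.port=61016 -jar my-service.jar

# Exec form: the JVM itself is PID 1, writes directly to the container's
# stdout/stderr, and receives signals such as SIGTERM from `docker stop`.
ENTRYPOINT ["java", "-Dspring.profiles.active=dev", "-Dserver.port=61016", "-jar", "my-service.jar"]
```

Whether this fully explains the empty logs here is unclear, but since `docker logs` only shows what PID 1 and its children write to stdout/stderr, the process layout is the first thing worth checking.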
Thank you :)
I use docker with COMPOSE_PROJECT_NAME to dynamically create container names. Somewhere in the last 2 months something changed, and my local machine now generates container names with hyphens,
e.g. project-name-traefik-1 instead of project-name_traefik_1.
How can I change this behavior? It breaks functionality on my Linux docker server, which for some reason keeps the old container naming structure.
I have the latest Docker Desktop and the latest Docker on the server. I can't find anything in the documentation that points to this change.
Resorted to adding container_name (which wasn't previously needed). As far as I can tell, the hyphen separator comes from Compose V2; running docker compose with the --compatibility flag is supposed to keep the old underscore separator.
I'm also using the COMPOSE_PROJECT_NAME environment variable to get predictable container names.
E.g.
...
  nginx:
    image: graffino/nginx:v1
    restart: unless-stopped
    container_name: ${COMPOSE_PROJECT_NAME}-nginx
...
So I have static files (a web app) running on container1, and a Node.js app running on container2. I want the node app to have write access to the static files on container1. How can I achieve this?
What I tried so far:
Docker compose - but it only allows for communication between containers (network access), not sharing the same filesystem. Therefore, node can't access the files on C1.
A way to do it is a docker-compose volume.
An example configuration yaml file for docker-compose v3 is below.
/share in the host OS filesystem will be shared across these 2 containers:
version: "3"
services:
  webapp:
    image: webapp:1.0
    volumes:
      - /share:/share
  nodeapp:
    image: nodeapp:1.0
    volumes:
      - /share:/share
Using a simple HTTP server (a simple node one can be found here) on one of the containers will allow you to host the static files. Then, this can be accessed from the other containers using the network all your containers are on.
Another option would be to mount a volume to both your containers. Any changes made via one container would reflect in the other if the same volume is mounted. More info can be found here.
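If you would rather not tie the setup to a specific host path, a named volume gives the same shared filesystem managed by Docker - a sketch, assuming the image names from the answer above:

```
version: "3"
services:
  webapp:
    image: webapp:1.0
    volumes:
      - shared-static:/share
  nodeapp:
    image: nodeapp:1.0
    volumes:
      - shared-static:/share

volumes:
  shared-static:
```

With this, Docker creates and manages the volume, and both containers see each other's writes under /share.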
I want to write log files to the host file system, so it is persisted, even if the Docker container dies.
Do I need to mount a volume in my Docker yaml?
VOLUME /var/log/myApp
Then do I just reference the mount like this?
var stream = fs.createWriteStream(`/var/log/myApp/myLog.log`);
stream.write('Hello World!');
Then outside of my container, I can go to the /var/log/myApp/ directory and see my logs.
I am trying to find an example of this, but haven't seen anything.
When you're setting up your container, you just use the -v argument:
-v ./path/to/local/directory:/var/log/myApp
The first path is where the volume is available on the host system (the period at the beginning means it's relative to where you're running the docker command; note that older versions of docker run only accept absolute host paths, while Compose resolves relative paths for you). The path on the right-hand side is where it's available in the container.
Once more, in docker-compose:
volumes:
- "./path/to/local/directory:/var/log/myApp"
And yes, this will allow the data stored in the volume to be persistent.
I have a docker compose file that looks like this:
version: '3'
services:
  webapp:
    build: '.'
    ports:
      - "8000:8000"
    networks:
      - db
  postgres:
    image: "postgres:alpine"
    environment:
      POSTGRES_PASSWORD: "password"
    volumes:
      - "./scripts:/docker-entrypoint-initdb.d"
    networks:
      - db
networks:
  db:
The scripts folder looks like this:
|- scripts
|-- init.sh
|-- init.sql
The Problem
My workflow for this project is iterative, so I add some SQL initialization data on my host OS, run sudo docker-compose down -v, and then sudo docker-compose up. (I have not set up my user to run docker without sudo for this scenario.)
When I update the init.sh file, the updates are reflected each time I run docker-compose up. The init.sql file, however, only remembers the first "version" of the file. Any subsequent updates are ignored when running docker-compose up.
Things I tried
Tried sudo docker-compose up --renew-anon-volumes --force-recreate, which does not seem to help.
Tried pruning all the volumes with sudo docker volume prune. Does not help.
Tried pruning the docker system with sudo docker system prune. Does not help.
What does work is copying the file and its content to a new file name. Renaming the file does not work.
So the question is simply: how do I get content updates of init.sql to be recognized by my docker compose setup? I don't understand why changes to init.sh are picked up but changes to init.sql are ignored.
UPDATE
One important piece of information is that the project is sitting on a virtualbox shared folder, so the underlying file system is vboxsf while all of this is happening.
So it turns out that the underlying file system plays a role here when using Docker volumes. I have been using a VirtualBox VM, and the project was sitting on a vboxsf file system, so the bind mount in my docker compose setup has been attaching to a vboxsf directory this whole time.
When I moved the project from the vboxsf filesystem to something else (whatever my home directory's filesystem is - ext4, I think), updates to the files worked as expected.
-----------I speak under correction here, the link is important to track--------------
My understanding is that the way vboxsf works, changes are broadcast between the host and guest filesystems so each OS can pick them up. There is also an aspect of how shared memory is accessed, but I really don't have the knowledge to elaborate on it further.
To understand the issue, this link seems to be the best resource for now:
https://www.virtualbox.org/ticket/819?cversion=0&cnum_hist=70
-------------------End----------------------------------
I don't think this will be a problem in production, but it will definitely make you question your sanity during local development.
So please, when you are using a Linux VM for development, check which filesystem your Docker volumes are using before you even start working on a project.
There are no error messages at all, which is one of the worst circumstances to be in when debugging a problem like this!
I also wasted about 2 days of my life trying to figure out what was going on and how to fix it. Hopefully those 2 wasted days result in many days saved instead :D
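A quick way to do that check (a sketch using GNU coreutils; `df -T .` gives similar output):

```shell
# Print the filesystem type of the current directory - run this in your
# project root. A VirtualBox shared folder reports "vboxsf"; a normal
# Linux filesystem reports something like "ext4" or "xfs".
stat -f -c %T .
```

If this prints vboxsf (or another network/shared filesystem type), expect bind-mount oddities like the one described above.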