For example, in my Node.js container I do:
throw new Error('lol'); or console.error('lol');
But when I open the container logs with docker-compose logs -f nodejs,
there are no statuses or colors; all logs appear to have the info status.
I use Datadog to collect logs from the container, and it also marks all logs as 'info'.
docker logs and similar commands just collect the stdout and stderr streams from the main process running inside the container. There's no "log level" associated with those streams, though some systems may treat or highlight the two differently.
As a basic example, you could run
docker run -d --name lister --rm busybox ls /
docker logs lister
The resulting file listing isn't especially "error" or "debug" level.
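You can also check that the two streams still come through, and can even be told apart, with no level attached; with the default json-file driver, docker logs replays the container's stdout on its own stdout and its stderr on its own stderr (the container name streams is arbitrary):
docker run --name streams busybox sh -c 'echo "to stdout"; echo "to stderr" >&2'
docker logs streams                # prints both lines
docker logs streams 2>/dev/null    # prints only the stdout line
docker rm streams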
The production-oriented setups I'm used to include the log level in the log messages themselves (in a Node context, I've used the Winston logging library) and then use a tool like Fluentd to collect and parse those messages.
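In the Datadog case from the question, the usual approach is to have the application emit one JSON object per line on stdout, carrying its own level; a JSON-aware collector can then read a status field for the log's severity instead of defaulting everything to info. A minimal, library-agnostic sketch of the idea (whether your collector remaps "status" is worth verifying against its documentation):
# Hedged sketch: a JSON log line that carries its own level; a JSON-aware
# collector can read "status" instead of marking everything as "info".
docker run --rm busybox sh -c 'echo "{\"status\":\"error\",\"message\":\"lol\"}"'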
Related
I'm running a Docker container (with a Spring Boot application inside) and mounting a location for the logs:
docker run -d --name myContainer -p 8080:80 -v /server/appLogs:/var/log myContainer:latest
Here I'm mounting /server/appLogs to /var/log/. My Spring Boot application writes its logs to /var/log, and I need to get the logs out to the host machine.
But over time, logs accumulate in /server/appLogs and fill up all the space on the server.
I know we can set the max size and max history in the logback-spring.xml file, but those settings don't seem to apply to the mounted location.
My plan is to write a shell script and add a cron job to auto-delete the logs in that location.
Is there any other good method to clear the logs in the /server/appLogs location?
Why don't the configurations in logback-spring.xml apply here?
You can use the ELK stack (https://www.elastic.co/) for application logs.
Here is a complete guide for reference: https://logz.io/learn/complete-guide-elk-stack/#elasticsearch
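For the cron-based cleanup the question already plans, a minimal sketch (the seven-day retention and the *.log* pattern are assumptions to adjust):
# Delete log files older than 7 days from the mounted directory;
# schedule it daily from cron, e.g.: 0 3 * * * /path/to/cleanup.sh
find /server/appLogs -type f -name '*.log*' -mtime +7 -delete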
I have a Java application that uses log4j2. When I run the application, it creates the log files in the logs folder and writes debug statements into the log file.
However, when I create a Docker image and run it, I can see the logs folder being created inside the container and the logs written to the file, but when I run the docker logs command I can't see any logs.
I have several modules, each with its corresponding log4j configuration, but when I run docker logs, none of the logs are printed, while I want all of them to appear there.
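One hedged workaround, assuming the Java process is PID 1 in the container: link the log file to the container's stdout so docker logs can see it, the same trick the official nginx image uses for its access log. The path below is hypothetical; adjust it to wherever log4j2 actually writes:
# Run before the application starts, e.g. in the image's entrypoint.
# /app/logs/app.log is a hypothetical path matching the log4j2 config.
ln -sf /dev/stdout /app/logs/app.log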
I am trying to configure my PHP errors to output to docker logs. All the documentation I have read indicates that docker logs are tied to the container's stdout and stderr, which come from /proc/self/fd/1 and /proc/self/fd/2. I created a symlink from my PHP error log file /var/log/php_errors.log to /proc/self/fd/1 with the command:
ln -sf /proc/self/fd/1 /var/log/php_errors.log
After linking the error log, I tested it by running this PHP script:
<?php
error_log("This is a custom error message constructed to test the php error logging functionality of this site.\n");
?>
The error message is echoed to the console, so I can see that PHP error logging is now redirected to stdout inside the container. But when I run docker logs -f <containername>, I never see the error message in the logs. Echoing from inside the container doesn't show up in the logs either, which is confusing, because my understanding is that echo writes to stdout.
Further reading informed me that docker logs will only show output from PID 1, which could be the issue. If this is the case, how can I correctly configure my PHP error logging to show up in docker logs outside the container?
I have also checked that I am using the default json-file logging driver, and I have tried this both in my local environment and on a web server.
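A hedged sketch of the usual fix, assuming PHP runs as a worker process (under php-fpm or Apache) rather than as PID 1: /proc/self resolves to whichever process opens the file, so the link has to point explicitly at PID 1's stream for docker logs to pick it up:
# Target PID 1's stdout explicitly; /proc/self/fd/1 resolves to the
# PHP worker's own stdout, which docker logs does not capture.
ln -sf /proc/1/fd/1 /var/log/php_errors.log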
I have a Default.vcl with the backend host and port, and I copied it into the container at varnish:/etc/varnish.
I am able to hit the backend via Varnish, but I could not trace the Varnish logs using docker logs.
Logs in Varnish are not produced by the main varnishd process.
You can either use the varnishlog binary to get in-depth logs, or the varnishncsa binary to get Apache-style logs.
You have to run either of these commands inside your container, on the shell. Unfortunately, this cannot be done through docker logs.
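For example (my-varnish is a hypothetical container name):
docker exec -it my-varnish varnishlog     # in-depth log records
docker exec -it my-varnish varnishncsa    # Apache/NCSA-style access log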
Thijs's answer is correct. I would also recommend looking at the varnishncsa service in the package to understand how you can run it from the same container, and at this blog post to understand what needs to be shared between the two containers if you decide to split them.
I'm using the docker/elk image to display my data in a Kibana dashboard (version 6.6.0), and it works pretty well. I started the service using the command below.
Docker Image git repo:
https://github.com/caas/docker-elk
Command:
sudo docker-compose up --detach
I expected it to run in the background, and it did. The server was up and running for two days, but on the third day Kibana alone stopped, so I used the command below to bring it back up:
sudo docker run -d <Docker_image_name>
It shows as up and running when I use the docker ps command, but when I try to hit the Kibana server in the Chrome browser it says the site is not reachable.
So I used the command below to restart the service:
sudo docker-compose down
After that, I can see the Kibana server up and running in the Chrome browser again, but all my data is lost.
I used the URL below in Jenkins to collect the data:
http://hostname:9200/ecdpipe_builds/external
Any idea how can I resolve this issue?
I did not see any persistent storage configuration in the GitHub docker-compose file of the image you mentioned.
Losing data is common with Docker containers if you do not configure persistent storage, so docker-compose down may cause you to lose your data if there is no persistence configured in the docker-compose file.
Persisting log data
In order to keep log data across container restarts, this image mounts
/var/lib/elasticsearch — which is the directory that Elasticsearch
stores its data in — as a volume.
You may however want to use a dedicated data volume to persist this
log data, for instance to facilitate back-up and restore operations.
One way to do this is to mount a Docker named volume using docker's -v
option, as in:
$ sudo docker run -p 5601:5601 -p 9200:9200 -p 5044:5044 \
-v elk-data:/var/lib/elasticsearch --name elk sebp/elk
This command mounts the named volume elk-data to
/var/lib/elasticsearch (and automatically creates the volume if it
doesn't exist; you could also pre-create it manually using docker
volume create elk-data).
So you can set these paths in your docker-compose file accordingly. Here is the link you can check: elk-docker-persisting-log-data
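A hedged sketch of what that could look like in the docker-compose file, mirroring the docker run example above (the service name and compose file version are assumptions):
version: '3'
services:
  elk:
    image: sebp/elk
    ports:
      - "5601:5601"
      - "9200:9200"
      - "5044:5044"
    volumes:
      - elk-data:/var/lib/elasticsearch   # the named volume survives docker-compose down
volumes:
  elk-data: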
Use a Docker volume or a host file location as persistent space.