How to set the timezone of 'docker logs -t'? - linux

My local timezone and my Docker container's timezone are both set to 'GMT+8:00', but 'docker logs -t' still shows timestamps in 'GMT+0:00'.
The picture below shows part of the output of 'docker logs -t'. The left timestamp is printed by Docker, and the right timestamp is printed by the application inside the container.

After some research, I found that docker logs -t always prints timestamps in UTC and there is no configuration option to change that. However, you can use a small script, as referenced in https://github.com/docker/cli/issues/604, to pipe the output and rewrite the timestamps.
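For example, here is a minimal sketch of such a pipe, assuming GNU date and a placeholder container name (mycontainer):

docker logs -t mycontainer 2>&1 | while read -r ts rest; do
  # GNU date can usually parse the RFC3339Nano timestamp docker prints;
  # reformat it in the local timezone and re-attach the rest of the log line
  echo "$(date -d "$ts" '+%Y-%m-%dT%H:%M:%S%z') $rest"
done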

Related

How to use --since option with docker logs command

I want to look at the last hour of a Docker container's log using the docker logs --since option. Which value should I provide for the --since parameter?
As the help says:
--since string   Show logs since timestamp (e.g. 2013-01-02T13:23:37) or relative (e.g. 42m for 42 minutes)
I would do
docker logs mycontainer_or_id --since 60m
This syntax works against my running container.
Please refer to the Docker docs.
docker logs --since 1h
The --since option shows only the container logs generated after a given date. You can specify the date as an RFC 3339 date, a UNIX timestamp, or a Go duration string (e.g. 1m30s, 3h). Besides RFC3339 date format you may also use RFC3339Nano, 2006-01-02T15:04:05, 2006-01-02T15:04:05.999999999, 2006-01-02Z07:00, and 2006-01-02.
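For instance, all of the following should work (the container name is a placeholder, and the second line assumes GNU date):

docker logs mycontainer --since 1h                              # Go duration string
docker logs mycontainer --since "$(date -d '1 hour ago' +%s)"   # UNIX timestamp
docker logs mycontainer --since 2022-10-05T12:00:00             # absolute date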
You may want logs from a specific date, but docker might not like your date's format.
In such cases, check whether the UNIX date command can parse it:
$ date -d "your date here"
Wed Oct 5 12:46:17 GMT 2022
If date's output looks right, then you can use date -I to produce a format that docker understands.
$ docker logs my_container --since "$(date -I -d "your date here")" | less -RX

Running tests in a container on Travis

While building my application on Travis I am trying to run the tests within a Docker container. The container starts and the tests are run, and when I log the container output I can see they have passed. It is my understanding that I can use grep for this, as seen below. So this is my Travis script:
script:
  - docker-compose up -d
  - docker logs dockertestapp_app_1
  - docker logs 2>&1 dockertestapp_app_1 | grep -q 'npm info ok'
I just want to grep the output of the container logs to see whether or not the tests pass but it always fails. Am I missing something simple?
Thank you in advance!
To avoid the 60-second sleep you described in your comment, start your tests manually by doing something like this:
docker exec -it dockertestapp_app_1 bash -c 'tests.py > /proc/1/fd/1'
Note that I'm executing a test file (in this example, tests.py) and sending its output to /proc/1/fd/1, so it ends up in the container logs. That way you can grep for the expression that means your tests passed, as you are currently doing.
TIP: you may not need to write to /proc/1/fd/1 at all, since your test script can return a non-zero exit code to indicate that the tests failed. That way you don't even need the grep line in your script.
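A sketch of that exit-code approach, keeping the placeholder names from above and assuming tests.py exits non-zero when a test fails:

script:
  - docker-compose up -d
  # Travis marks the build as failed if this command exits non-zero, so no grep is needed
  - docker exec dockertestapp_app_1 tests.py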

Cron / wget jobs intermittently not running - not getting into access log

I've a number of accounts running cron-started php jobs hourly.
The generic structure of the command is this:
wget -q -O - http://some.site.com/cron.php
Now, this used to be running just fine.
Lately, though, on a number of accounts it has started playing up, but only on this one server. Once or twice a day the PHP file is not run: the access log is missing the relevant entry, while the cron log shows that the job was run.
We've added a logging option to the command (-o /tmp/logfile), but the file shows nothing.
I'm at a loss, really. I'm looking for ideas what can be wrong, or how to sidestep this issue as it has started taking up way too much of my time.
Has anyone seen anything remotely like this?
Thanks in advance!
Try this command
wget -d -a /tmp/logfile -O - http://some.site.com/cron.php
With -q you turn off wget's output. With -d you turn on debug output (maybe -v for verbose output is already enough). With -a you append logging messages to /tmp/logfile instead of always creating a new file.
You can also use curl:
curl http://some.site.com/cron.php
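For reference, a hypothetical crontab entry combining the flags above (the URL and log path are placeholders):

# run hourly; wget's own messages are appended to /tmp/logfile, the page body is discarded
0 * * * * wget -d -a /tmp/logfile -O - http://some.site.com/cron.php > /dev/null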

docker attach vs lxc-attach

UPDATE: Docker 0.9.0 uses libcontainer now, diverging from LXC; see: Attaching process to Docker libcontainer container
I'm running an instance of Elasticsearch:
docker run -d -p 9200:9200 -p 9300:9300 dockerfile/elasticsearch
Checking the process shows the following:
$ docker ps --no-trunc
CONTAINER ID                                                       IMAGE                             COMMAND                                           CREATED             STATUS         PORTS                                            NAMES
49fdccefe4c8c72750d8155bbddad3acd8f573bf13926dcaab53c38672a62f22   dockerfile/elasticsearch:latest   /usr/share/elasticsearch/bin/elasticsearch java   About an hour ago   Up 8 minutes   0.0.0.0:9200->9200/tcp, 0.0.0.0:9300->9300/tcp   pensive_morse
Now, when I try to attach to the running container, I get stuck:
$ sudo docker attach 49fdccefe4c8c72750d8155bbddad3acd8f573bf13926dcaab53c38672a62f22
[sudo] password for lsoave:
the tty doesn't connect and the prompt doesn't come back. Doing the same with lxc-attach works fine:
$ sudo lxc-attach -n 49fdccefe4c8c72750d8155bbddad3acd8f573bf13926dcaab53c38672a62f22
root@49fdccefe4c8:/# ps -ef
UID        PID  PPID  C STIME TTY          TIME CMD
root         1     0 49 20:37 ?        00:00:20 /usr/bin/java -Xms256m -Xmx1g -Xss256k -Djava.awt.headless=true -XX:+UseParNewGC -XX:+UseConcMa
root        88     0  0 20:38 ?        00:00:00 /bin/bash
root        92    88  0 20:38 ?        00:00:00 ps -ef
root@49fdccefe4c8:/#
Does anybody know what's wrong with docker attach ?
NB. dockerfile/elasticsearch ends with:
ENTRYPOINT ["/usr/share/elasticsearch/bin/elasticsearch"]
You're attaching to a container that is running Elasticsearch, which isn't an interactive command. You don't get a shell to type into because the container is not running a shell. The reason lxc-attach works is that it gives you a default shell. Per man lxc-attach:
If no command is specified, the current default shell of the user
running lxc-attach will be looked up inside the container and
executed. This will fail if no such user exists inside the container
or the container does not have a working nsswitch mechanism.
docker attach is behaving as expected.
As Ben Whaley notes this is expected behavior.
It's worth mentioning though that if you want to monitor the process you can do a number of things:
Start bash as the foreground process: e.g. $ES_DIR/bin/elasticsearch && /bin/bash will give you your shell when you attach. Mainly useful during development. Not so clean :)
Install an ssh server. Although I've never done this myself it's a good option. Drawback is of course overhead, and maybe a security angle. Do you really want ssh on all of your containers? Personally, I like to keep them as small as possible with single-process as the ultimate win.
Use the log files! You can use docker cp to get the logs locally, or better, the docker logs $CONTAINER_ID command. The latter gives you the accumulated stdout/stderr output for the entire lifetime of the container each time, though.
Mount the log directory. Just mount a directory on your host and have Elasticsearch write to a logfile in that directory. You can have syslog on your host, Logstash, or whatever turns you on ;). Of course, the drawback here is that you are now using your host more than you might like. I also found a nice experiment using Logstash in this blog.
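As a sketch of that last option, assuming the image writes its logs under /usr/share/elasticsearch/logs (check your image's configuration; the host path is illustrative):

# bind-mount a host directory over the container's log directory
docker run -d -p 9200:9200 -p 9300:9300 \
  -v /var/log/elasticsearch:/usr/share/elasticsearch/logs \
  dockerfile/elasticsearch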
FWIW, now that Docker 1.3 is released, you can use "docker exec" to open up a shell or other process on a running container. This should allow you to effectively replace lxc-attach when using the native driver.
http://blog.docker.com/2014/10/docker-1-3-signed-images-process-injection-security-options-mac-shared-directories/
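For example, against the container from the question:

# opens an interactive shell without disturbing the Elasticsearch process (PID 1)
docker exec -it 49fdccefe4c8 /bin/bash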

RRD print the timestamp of the last valid data

I have an RRD database storing ping responses from a wide range of network equipment.
How can I print on the graph the timestamp of the last valid entry in the RRD database, so that if a host is down I can see when it went down?
I use the following to create the RRD file.
rrdtool create terminal_1.rrd -s 60 \
DS:ping:GAUGE:120:0:65535 \
RRA:AVERAGE:0.5:1:2880
Use the lastupdate option of rrdtool.
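For example, with the file created above, this prints the time and value of the most recent update (the output is roughly a DS-name header followed by a "timestamp: value" line):

rrdtool lastupdate terminal_1.rrd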
Another solution exists if you only have one file per host: don't update your RRD if the host is down. You can then see the last update time with a plain ls or stat, as in:
ls -l terminal_1.rrd
stat --format %Y terminal_1.rrd
If you plan to use RRD's caching daemon, you have to use the last command so that pending updates are flushed first.
rrdtool last terminal_1.rrd
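rrdtool last prints a plain UNIX timestamp; with GNU date you can turn it into a readable time:

# convert the epoch value returned by 'rrdtool last' (GNU date syntax)
date -d "@$(rrdtool last terminal_1.rrd)"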
